Hyperspectral imaging for tissue classification, a way toward smart laparoscopic colorectal surgery.

Elisabeth J M Baltussen1, Esther N D Kok1, Susan G Brouwer de Koning1, Joyce Sanders2, Arend G J Aalbers1, Niels F M Kok1, Geerard L Beets1, Claudie C Flohil3, Sjoerd C Bruin4, Koert F D Kuhlmann1, Henricus J C M Sterenborg1,5, Theo J M Ruers1,6.   

Abstract

In the last decades, laparoscopic surgery has become the gold standard in patients with colorectal cancer. To overcome the drawback of reduced tactile feedback, real-time tissue classification could be of great benefit. In this ex vivo study, hyperspectral imaging (HSI) was used to distinguish tumor tissue from healthy surrounding tissue. A sample of fat, healthy colorectal wall, and tumor tissue was collected per patient and imaged using two hyperspectral cameras, covering the wavelength range from 400 to 1700 nm. The data were randomly divided into a training (75%) and test (25%) set. After feature reduction, a quadratic classifier and support vector machine were used to distinguish the three tissue types. Tissue samples of 32 patients were imaged using both hyperspectral cameras. The accuracy in distinguishing the three tissue types using both hyperspectral cameras was 0.88 (STD = 0.13) on the test dataset. When the accuracy was determined per patient, a mean accuracy of 0.93 (STD = 0.12) was obtained on the test dataset. This study shows the potential of using HSI in colorectal cancer surgery for fast tissue classification, which could improve clinical outcome. Future research should focus on imaging entire colon/rectum specimens and on translating the technique to an intraoperative setting.

Keywords:  colorectal cancer; hyperspectral imaging; machine learning; margin assessment; support vector machine

Year:  2019        PMID: 30701726      PMCID: PMC6985687          DOI: 10.1117/1.JBO.24.1.016002

Source DB:  PubMed          Journal:  J Biomed Opt        ISSN: 1083-3668            Impact factor:   3.170


Background

Colorectal cancer is the third most commonly diagnosed cancer worldwide and the fourth leading cause of cancer-related death. For patients with colorectal cancer, surgery is the cornerstone of treatment. In the last decades, laparoscopic surgery for colorectal cancer has become common practice. Randomized controlled trials have shown clinical outcomes for laparoscopic surgery similar to those of open surgery, with a shorter hospital stay. One of the drawbacks of laparoscopic surgery is the reduced tactile feedback during surgery. The lack of tactile feedback makes tissue recognition more cumbersome, especially in areas where radical resection margins are often compromised, such as in locally advanced tumors and in rectal cancer. Hence, an alternative technique that would enable the surgeon to distinguish tumor from normal tissue during laparoscopic surgery in real time would be of great benefit to secure radical resection in difficult areas, such as in rectal cancer. We investigate the use of hyperspectral imaging (HSI) as a tool to ensure radical margins in these circumstances, distinguishing tumor from healthy colorectal tissue. In HSI, a broadband light source is used to illuminate an object, such as tissue. The light interacts with the tissue through reflection, scattering, and absorption of the photons. This interaction strongly depends on the tissue type and the wavelength. After several interactions within the tissue, part of the light returns to the tissue surface and is detected by the hyperspectral camera. In the resulting hyperspectral image, the tissue-specific spectral changes of the light can be analyzed. Ultimately, HSI results in 2-D images of the object obtained over several wavelengths, yielding a 3-D datacube with two spatial dimensions and wavelength as the third dimension (Fig. 1).
Fig. 1

(a) Hyperspectral image, with two spatial dimensions (x, y) and one spectral dimension (λ). (b) On the right side, the spectra of the two selected pixels are shown.

Previous studies used HSI as a diagnostic tool in cancer of the cervix, breast, skin, tongue, head and neck, stomach, and colon and rectum. In colorectal cancer, most studies focused on HSI of hematoxylin–eosin (H&E) pathology slides or on tissue classification during endoscopy to distinguish tumor from healthy tissue. In the current ex vivo study, we investigated the use of HSI to differentiate normal colorectal tissue from tumor tissue in a surgical setting, looking at the tissue from its outer surface instead of from the lumen of the colon. To this end, colorectal cancer samples obtained during surgery were imaged using HSI in the visible to near-infrared region. The spectra obtained from these images were classified using a classification algorithm and were verified with histology. Finally, complete hyperspectral images were classified using the trained classifier. The ultimate goal is to develop a real-time technique for tissue identification in laparoscopic colorectal surgery.

Materials and Methods

Hyperspectral Cameras

Two hyperspectral cameras were used for the measurements, one in the visual wavelength range and one in the near-infrared wavelength range. The first was a SPECIM (Spectral Imaging Ltd., Finland) spectral camera (PFD-CL-65-V10E) with a wavelength range from 400 to 1000 nm, a CMOS sensor, and a spectral resolution of 3.0 nm, hereafter referred to as the visual camera. The second was also a SPECIM spectral camera (VLNIR CL-350-N17E) with a wavelength range from 900 to 1700 nm, an InGaAs sensor, and a spectral resolution of 5.0 nm, hereafter referred to as the near-infrared camera. Both cameras were push-broom cameras, meaning that they image a single line (x-axis) only. To obtain a full 2-D image, the samples were placed on a translational stage and moved underneath the camera (y-axis). All samples were illuminated using a halogen light source. Both a dark and a white reference image were taken before each measurement. The dark reference image was taken with the shutter of the camera closed; the white reference image was taken on a Spectralon reflectance standard. The linear behavior of the visual camera allowed for a simple calibration of the camera using Eq. (1):

    R_cal(λ) = (R_raw(λ) − R_dark(λ)) / (R_white(λ) − R_dark(λ)),  (1)

where R_cal is the calibrated spectrum, R_raw is the original spectrum, R_dark is the dark reference, and R_white is the white reference. The near-infrared camera, however, showed slightly nonlinear behavior and was therefore calibrated using a fourth-order polynomial, Eq. (2), instead of the linear formula:

    R_cal(λ) = a_0 + a_1 R_raw(λ) + a_2 R_raw(λ)^2 + a_3 R_raw(λ)^3 + a_4 R_raw(λ)^4,  (2)

where the coefficients a_0 to a_4 are variables determined using a series of five reference samples. The values of the coefficients differ per pixel and wavelength; R_raw is the original spectrum and R_cal is the calibrated spectrum.
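The two calibration steps can be sketched as follows. This is an illustrative Python/NumPy translation, not the authors' MATLAB code; the function names and the array layout (one reflectance value per wavelength) are assumptions, and in practice the polynomial coefficients would be fitted per pixel and wavelength from the five reference samples.

```python
import numpy as np

def calibrate_linear(raw, dark, white):
    """Flat-field calibration for the visual camera, Eq. (1):
    reflectance = (raw - dark) / (white - dark), per wavelength."""
    return (raw - dark) / (white - dark)

def calibrate_polynomial(raw, coeffs):
    """Fourth-order polynomial calibration for the near-infrared camera,
    Eq. (2). `coeffs` holds a0..a4 (lowest order first), fitted beforehand
    from reference samples with known reflectance."""
    # Horner evaluation of a0 + a1*r + a2*r^2 + a3*r^3 + a4*r^4
    result = np.zeros_like(raw)
    for a in coeffs[::-1]:
        result = result * raw + a
    return result
```

With identity-like coefficients (a0 = a1 = 1, rest 0), the polynomial reduces to `1 + raw`, which makes the Horner loop easy to check by hand.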

Study Protocol

Patients who underwent surgery for colorectal cancer in the Antoni van Leeuwenhoek—The Netherlands Cancer Institute (Amsterdam, The Netherlands) and the Slotervaart Medical Centre (Amsterdam, The Netherlands) were included in this ex vivo study. The study protocol was approved by the institutional ethics review board. Immediately after colorectal resection, the entire resected specimen was taken to the pathology department. Cross-sections were cut from the specimen by the pathologist, and three tissue samples were obtained: tumor tissue, healthy colon or rectal wall, and (pericolorectal) fat. The cross-sections were placed in a pathology cassette, where they remained during the entire data acquisition. All measurements were performed within 1 h after specimen resection. Before the hyperspectral measurements, an RGB image was taken of each tissue sample. Next, hyperspectral images were obtained from the tissue samples, after which the samples were taken back to the pathology department. The samples were processed according to standard protocol in the same pathology cassette, to prevent large tissue deformations. The corresponding H&E slides were examined by the pathologist, who annotated the various tissue types. For further data analysis, the digitized annotated slides were registered to the RGB image, and the RGB image to the hyperspectral image, in MATLAB (version 8.5, MathWorks Inc., Natick, Massachusetts, United States), using a nonrigid transformation to overcome the effects of mechanical deformation of the tissue during the standard workflow of tissue processing and staining. Finally, the registered pathology slide was registered to the registered RGB image (Fig. 2). Using these registrations, each pixel from the hyperspectral image could be given a histological classification. To create a database of hyperspectral pixels, pixels were manually selected within areas that the pathologist had marked as unambiguously belonging to one tissue type. About 30 pixels per tissue type per patient were selected when possible. When the surface area of a tissue type was too small to select 30 individual pixels, fewer pixels were selected. Pixels in the mucosal layer were not taken into account because the mucosa will not be visible during the ultimate surgical application of the technology. Hence, the pixels from the healthy colorectal wall were all in the muscular layer.
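The study performs this nonrigid registration in MATLAB. As an illustrative open-source analogue (not the authors' implementation), a dense deformation field can be estimated with TV-L1 optical flow in scikit-image and applied with `warp`; the function name and the toy blob images below are ours, standing in for the deformed pathology slide and the RGB reference.

```python
import numpy as np
from skimage.registration import optical_flow_tvl1
from skimage.transform import warp

def register_nonrigid(reference, moving):
    """Estimate a dense (nonrigid) deformation mapping `moving` onto
    `reference` via TV-L1 optical flow, then resample `moving` with it."""
    v, u = optical_flow_tvl1(reference, moving)  # per-pixel row/col displacements
    nr, nc = reference.shape
    rows, cols = np.meshgrid(np.arange(nr), np.arange(nc), indexing="ij")
    return warp(moving, np.array([rows + v, cols + u]), mode="edge")

# Toy demonstration: a Gaussian blob shifted by a few pixels; registration
# should bring it back in line with the reference.
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
reference = np.exp(-(X**2 + Y**2) / 0.1)
moving = np.roll(reference, 5, axis=1)
registered = register_nonrigid(reference, moving)
```

For real slide-to-RGB registration, both images would first be converted to grayscale and roughly aligned, since optical flow only recovers modest local deformations.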
Fig. 2

Registration of the HSI, RGB, and pathology images. In the upper row from left to right, annotated pathology image (yellow = fat, green = healthy colorectal wall, red = tumor, blue = mucosa), RGB image and HSI. The second row from left to right, the annotated pathology image registered to the RGB image and the RGB image registered to the HSI.


Data Preprocessing

Preprocessing of the data was performed using a 3.40 GHz Intel Xeon E3-1240 CPU and 16 GB RAM and consisted of two steps. First, the spectra were normalized using standard normal variate (SNV) normalization, which was applied to each individual spectrum: the mean was subtracted from the spectrum, after which the spectrum was divided by its standard deviation, see Eq. (3):

    S_norm = (R_cal − μ) / σ,  (3)

where S_norm is the normalized spectrum, R_cal is the calibrated spectrum as given in Eq. (1) or Eq. (2), and μ and σ are the mean and standard deviation of R_cal, respectively. This normalization created a zero baseline and a variance equal to one for all spectra. After combination of the visual and near-infrared images, all outliers caused by specular reflection were removed. Outliers were defined as spectra with an average distance from the mean spectrum of more than three standard deviations, determined over all wavelengths. Next, in order to combine the spectra of the two cameras, the visual images were downsampled. Downsampling was necessary because of the higher resolution of the visual camera compared to the near-infrared camera. A rigid spatial registration was then performed, obtaining a pixel-to-pixel correlation between the images of the two cameras.
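The SNV normalization and the three-standard-deviation outlier filter can be sketched as follows. This is an illustrative Python/NumPy sketch; the exact distance measure used for outlier detection is our interpretation of the description above.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate, Eq. (3): per spectrum, subtract the mean and
    divide by the standard deviation, giving zero baseline and unit variance."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def remove_outliers(spectra, n_std=3.0):
    """Drop spectra whose mean absolute distance to the dataset-mean spectrum,
    taken over all wavelengths, exceeds `n_std` standard deviations
    (used in the study to remove specular-reflection glints)."""
    mean_spectrum = spectra.mean(axis=0)
    dist = np.abs(spectra - mean_spectrum).mean(axis=1)
    keep = dist < dist.mean() + n_std * dist.std()
    return spectra[keep], keep
```

`remove_outliers` also returns the boolean mask, so the corresponding histology labels can be filtered with the same indexing.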

Data Analysis

Data analysis was performed using the PerClass toolbox (Academic version 5.0, PR Sys Design, Delft, The Netherlands) in MATLAB. The data were randomly divided into a training and a test set. Per patient, all spectra were assigned to either the training or the test set, so that spectra from one patient were never split between the two sets. The training set contained 75% of the patients and the remaining 25% was used as a test set. The data contained a hyperspectral image from both the visual and near-infrared camera. The development of the classification algorithm consisted of three steps. First, feature reduction was applied to the spectra to prevent overfitting of the classifier. For this purpose, k-means clustering was used to determine spectral bands. The clustering is based on the average intensity values of the spectra from the training dataset; if the wavelengths within a cluster are not contiguous, the cluster is divided into multiple spectral bands. The spectral bands were determined once on the training set and were also used on the test set. From these spectral bands, only the mean intensity was used as a feature in the classification algorithm. Second, fat was classified using a quadratic classifier, which was optimized with a 10-fold cross-validation. The selected optimum on the ROC curve was the point with the lowest mean error. Third, a linear support vector machine (SVM) was used to distinguish the tumor spectra from the healthy colorectal wall spectra. The SVM was also optimized using a 10-fold cross-validation. Classification of the pixels was done based on the probability given by the classifiers; pixels were assigned to the tissue type with the highest probability. The performance of the classifiers was compared using the area under the ROC curve (AUC), the accuracy, the sensitivity, the specificity, and the Matthews correlation coefficient (MCC), Eq. (4):

    MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)),  (4)

where TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively. The MCC returns a value from −1 to +1, where −1 indicates total disagreement and +1 indicates a perfect prediction. The accuracy was determined in two different ways: first per tissue type and thereafter averaged, and second per patient and then averaged. Finally, to assess the contribution of each camera, the classification was also trained and tested on datasets containing data from only one of the two cameras. The performance of this classification was compared with that of the dataset containing data from both cameras, using the ROC curves and the performance measures.
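The study implements this pipeline with the PerClass toolbox in MATLAB; as a rough open-source analogue, the sketch below re-creates the three steps with scikit-learn. The cluster count, the class-label encoding (0 = fat, 1 = muscle, 2 = tumor), and the handling of non-contiguous clusters are assumptions, and the cross-validated optimization of the operating points is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC

def spectral_bands(train_spectra, n_clusters=4, seed=0):
    """k-means on the mean training spectrum groups wavelengths with similar
    average intensity; clusters that are not contiguous in wavelength are
    split into separate bands, as described in the text."""
    mean_spec = train_spectra.mean(axis=0).reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(mean_spec)
    bands, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            bands.append((start, i))  # half-open range of wavelength indices
            start = i
    return bands

def band_features(spectra, bands):
    """One feature per spectral band: the mean intensity over its wavelengths."""
    return np.column_stack([spectra[:, a:b].mean(axis=1) for a, b in bands])

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient, Eq. (4)."""
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / den if den else 0.0

class TwoStepClassifier:
    """Step 1: a quadratic classifier separates fat (label 0) from the rest.
    Step 2: a linear SVM separates muscle (1) from tumor (2)."""
    def fit(self, X, y):
        self.qda = QuadraticDiscriminantAnalysis().fit(X, y == 0)
        rest = y != 0
        self.svm = SVC(kernel="linear").fit(X[rest], y[rest])
        return self

    def predict(self, X):
        pred = self.svm.predict(X)       # muscle vs tumor everywhere
        pred[self.qda.predict(X)] = 0    # overwrite pixels flagged as fat
        return pred
```

On synthetic three-class spectra with distinct shapes, this two-step scheme separates the classes cleanly; on real data, the band count and classifier operating points would be tuned by cross-validation as in the study.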

Results

Patients

In total, 54 patients were included in this study: 27 men (50%) and 27 women (50%), with a median age of 65.5 years (IQR: 60 to 73). The samples of 32 patients were imaged with both the near-infrared and the visual camera; the remaining 22 patients were imaged with only the near-infrared camera. Most of the tumors were located in the colon and sigmoid. One sample showed complete pathological response to preoperative treatment, and thus no tumor tissue could be taken from that sample. Patient and tumor characteristics are described in Table 1.
Table 1

Characteristics of the group of patients measured with the near-infrared camera and the group of patients measured with the visual camera.

                                          NIR camera   VIS camera
Total number of patients                  54           32 (a)
Gender           Male                     27           14
                 Female                   27           18
Age              Median                   65.5         66.5
                 Interquartile range      60–73        60–71.5
Tumor location   Cecum                    7            7
                 Colon                    23           12
                 Sigmoid                  21           11
                 Rectum                   3            2
Tumor type       Complete response        1            1
                 Adenocarcinoma           46           25
                 Mucinous adenocarcinoma  7            6
Tumor stage      pT0                      1            1
                 pT1                      3            1
                 pT2                      9            4
                 pT3                      26           16
                 pT4                      15           10

(a) All patients measured with the VIS camera are also included in the NIR camera measurements.


Data Acquisition and Processing Time

The obtained tissue samples were first placed under the near-infrared camera and subsequently under the visual camera. Data acquisition times were 20 and 30 s for the near-infrared and the visual camera, respectively. In total, the data of one patient were acquired in 1 min. Image preprocessing took 60 s per tissue sample in total, for both cameras combined and all wavelengths. After training of the classifier, classification of the test data took 2 s per patient.

Classification with the Combination of Visual and Near-Infrared Camera

For the classification with the combination of the visual and near-infrared camera, only the tissue samples scanned with both cameras were included. The dataset of the combined visual and near-infrared camera images contained 2194 spectra from 32 patients. After outlier removal, 2170 spectra remained in the combined dataset, of which 857 were taken from fat, 563 from muscle, and 750 from tumor. The training set consisted of 24 patients with a total of 1726 spectra. The test set consisted of the remaining 8 patients and 444 spectra. Due to the presence of noise at the lower and upper ends of the wavelength range of both cameras, the ranges of 450 to 950 nm and 970 to 1600 nm were selected for analysis for the visual and near-infrared camera, respectively. As shown in Fig. 3, the reflection spectra obtained with the visual camera and the near-infrared camera are not connected after calibration. This is related to differences in the optical geometry of the two camera setups. In Fig. 3, the large difference in normalized intensity visible at 960 nm is caused by the individual SNV normalization of both cameras before combining the datasets.
Fig. 3

Spectral bands determined for the dataset with the combination of visual and near-infrared camera images are shown together with the mean spectra of fat (yellow), healthy colorectal wall (green), and tumor (red). The 13 spectral bands all have a different gray value and are separated by black vertical lines. Between 950 and 970 nm, a gap is shown in the data. This region is not covered by the cameras.

Based on the training set, 13 spectral bands were determined, as shown in Fig. 3. On the training data, the quadratic classifier obtained an MCC, AUC, accuracy, sensitivity, and specificity of 1.00 in separating fat from healthy and tumor tissue. The SVM applied on the training dataset, to distinguish tumor from muscle, provided an MCC of 0.83, an AUC of 0.98, an accuracy of 0.91, and a sensitivity and specificity of 0.93 and 0.90, respectively. The accuracy of the combination of the quadratic classifier and the SVM on the training dataset was 0.94 when assessed per tissue type. When determined per patient and averaged, a training accuracy of 0.94 was likewise obtained. The results of the combination of the quadratic classifier and the SVM on the test dataset are shown in Table 2. The accuracy determined per tissue type and averaged on the test dataset was 0.88 (STD = 0.13). The accuracy calculated per patient and thereafter averaged was 0.93 (STD = 0.12).
Table 2

Results of the combined classifiers on the test dataset with a combination of visual and near-infrared camera images. The mean accuracy averaged over the tissue types was 0.88 (STD = 0.13) and over the patients was 0.93 (STD = 0.12).

                          Decision based on hyperspectral data
                          Fat      Muscle   Tumor    Total
Gold standard   Fat       187      0        0        187
                Muscle    30       81       4        115
                Tumor     0        9        133      142
                Total     217      90       137      444

In Fig. 4, the results of the classification of all pixels in a hyperspectral image of one patient from the test set are shown. The different colors represent the different tissue types. The certainty of the classification, based on the probability, is shown by the intensity of the color: the more intense the color, the higher the certainty of the classifier for that classification.
Fig. 4

Classification of the tissue samples of one patient from the test set of the combined dataset of the visual and near-infrared camera. In the first column, the RGB image of each tissue sample is shown. The second column shows the registered annotated pathology image (yellow = fat, green = muscle or healthy colorectal wall, red = tumor, blue = mucosa). In the third column, the classification based on the visual and near-infrared spectra is shown, projected on the binary mask of the RGB image (yellow = fat, green = muscle or healthy colorectal wall, red = tumor). The first row shows the healthy tissue, including fat and healthy colorectal wall. The second row shows the tumor tissue sample. Tissue annotated by the pathologist as mucosa (blue) is not classified and is shown as white in the third column.


Classification with a Single Camera

The classification of fat, healthy colon or rectal wall, and tumor was also performed on the datasets including only one of the two cameras. The dataset with only spectra from the near-infrared camera contained 54 patients and 4352 spectra, of which 1690 were measured in fat, 1251 in the muscular layer of healthy colon or rectal wall, and 1411 in tumor. After removal of the outliers, 4309 spectra remained (1676 fat, 1232 muscle, and 1401 tumor). For the training set, 41 patients were randomly selected, with a total of 3241 spectra. The test set contained 13 patients and 1068 spectra. For the images of the visual camera, the same pixels were used as selected for the images of the near-infrared camera. A total of 2194 spectra, from 32 patients, were included in this dataset. Of these spectra, 866 were measured in fat, 569 in muscle, and 759 in tumor. After removal of the outliers, 2164 spectra remained (854 fat, 560 muscle, and 750 tumor). The training set included 24 patients with 1723 spectra, and the remaining 8 patients were included in the test set, which contained 441 spectra. After spectral bands were extracted from the datasets, the quadratic classifiers were trained and ROC curves for both datasets were obtained. In Fig. 5, the ROC curves of both classifiers are shown together with the ROC curve of the classifier created with the combined dataset. This shows a slightly worse performance for the dataset with only visual camera images compared to the dataset with only near-infrared images or the dataset including images of both cameras. This was also seen in the performance measures, with an MCC of 0.90 for the dataset with only visual camera images, and an MCC of 0.99 and 1.00 for the dataset with only near-infrared camera images and the combined dataset, respectively. All other performance measures showed the same trend.
Fig. 5

ROC curves of the training results of the quadratic classifier distinguishing fat from all other tissue types. The three datasets are shown as the visual camera (green), near-infrared camera (red), and the combination of the visual and near-infrared camera (blue).

In Fig. 6, the ROC curves of the SVMs are shown for the three training datasets. Here, again, the dataset containing only visual camera images showed the worst performance. However, there is a clear difference between the dataset containing only near-infrared camera images and the dataset containing images of both cameras, where the latter outperformed the former. A summary of the performance measures of the SVM is shown in Table 3. The same trend is seen for all performance measures of the three datasets: the dataset including only visual camera images performed worst, and the combined dataset performed best.
Fig. 6

ROC curves of the training results of the SVMs distinguishing tumor from healthy colorectal tissue. The three datasets are shown as the visual camera (green), near-infrared camera (red), and the combination of the visual and near-infrared camera (blue).

Table 3

Performance measures for the SVM from the three training datasets.

Performance measure   Visual camera   Near-infrared camera   Combined dataset
MCC                   0.50            0.59                   0.83
AUC                   0.81            0.87                   0.98
Accuracy              0.74            0.80                   0.91
Sensitivity           0.77            0.78                   0.93
Specificity           0.74            0.81                   0.90

On the test sets of the two datasets containing data from only one of the two cameras, the accuracy for determining the tissue types was 0.67 for the visual camera data and 0.83 for the near-infrared camera data. The accuracy calculated per patient and averaged was 0.71 for the visual camera data and 0.83 for the near-infrared camera data. The classifiers created were also used to classify the spectra from each pixel of entire hyperspectral images of one of the patients from the test set that was imaged by both cameras. In Fig. 7, the result of this classification is shown for all three classifications.
Fig. 7

Classification of the tissue samples of one patient from the test set of the combined dataset. In the first column, the RGB image of each tissue sample is shown. The second column shows the registered annotated pathology H&E image (yellow = fat, green = muscle or healthy colorectal wall, red = tumor, blue = mucosa). The third to fifth columns show the classification based on the visual image only, the near-infrared image only, and the combined visual and near-infrared image, respectively, projected on the binary mask of the RGB image (yellow = fat, green = muscle or healthy colorectal wall, red = tumor). From top to bottom, fat, healthy colorectal wall, and tumor tissue are shown. Tissue annotated as mucosa (blue) by the pathologist is not classified and is shown as white in the third to fifth columns.


Discussion

In this study, the potential added value of HSI for fast tissue classification during colorectal cancer surgery was examined. As a first step, an ex vivo study was designed in which tissue samples from colorectal cancer surgery were imaged with two hyperspectral cameras. One camera obtained images in the visual wavelength range (400 to 1000 nm), and the second camera obtained images in the near-infrared wavelength range (900 to 1700 nm). HSI allowed accurate discrimination of fat, healthy colon or rectal wall, and tumor tissue, with an accuracy of 0.88 (STD = 0.13) for the combination of the visual and near-infrared camera images. Current literature on HSI in colorectal cancer has mainly focused on the classification of H&E pathology slides or on the classification of tissue during endoscopy. For the first application, hyperspectral images were made of H&E pathology slides to obtain an objective classification of colon biopsies into healthy or malignant tissue. This application is far from the goal of the current study, which is focused on near real-time imaging during surgery. For the second application, a hyperspectral camera was combined with an endoscope to obtain hyperspectral images during endoscopy. The study by Kumashiro et al. obtained in vivo hyperspectral data during colonoscopy and was able to distinguish tumor from healthy mucosa with a sensitivity of 0.73 and a specificity of 0.82, using Pearson correlation analysis. The study by Han et al. obtained better results, with an accuracy of 0.94 and a sensitivity and specificity of 0.97 and 0.91, respectively. That study used hyperspectral images in the spectral range from 405 to 665 nm, so only the visual wavelength range. These results are better than the results shown in the current study for the dataset including only the visual camera images. The main difference between those two studies and the current study is the location of the measurements. Han et al. and Kumashiro et al.
performed measurements from the lumen of the colon during endoscopy, whereas the current study focused on the surgical application and performed measurements from the outer surface of the colon. In a previous study, we showed the possibility of distinguishing the three tissue types—fat, healthy colon or rectal wall, and tumor—using fiberoptic diffuse reflectance spectroscopy (DRS) in a surgical setting. The information obtained with fiberoptic DRS is very similar to the information obtained with a hyperspectral camera system. Therefore, it is not surprising that the results obtained in the current study are comparable to the results obtained in the previous study using DRS. However, the accuracy of the current study is slightly lower than the accuracy obtained with the DRS measurements. An explanation for this difference might be the correlation with the gold standard, histology, which is less challenging for the fiberoptic point measurements performed in the previous study. In the testing of the combination of the two cameras, 30 muscle spectra of one particular patient were classified as fat. These misclassifications are most likely due to a fault in the registration between the hyperspectral images and the histology in this specific patient. The influence of this one patient can be seen in the accuracy determined per patient, which is 0.93 and similar to the accuracy obtained in the DRS study. For the evaluation of the performance of the classifiers, six different performance measures were used. Of these, the MCC and AUC are the most informative in the current study, because these measures are relatively insensitive to the effect of an imbalanced dataset. However, for the combination of the quadratic classifier and the SVM, the MCC and AUC cannot be used, because both measures only apply to a two-class problem. For the quadratic classifier, no large differences are seen between the performance measures for the three datasets.
However, the performance measures of the SVM did show a difference: the combination of the two cameras clearly outperforms the datasets using only one of the two cameras (Table 3). For the combination of the quadratic classifier and the SVM, the accuracies can be compared. Here, the combination of the two cameras outperforms the datasets with data from only one of the two cameras. Comparing the two cameras, the near-infrared camera slightly outperforms the visual camera, with an MCC of 0.59 and 0.50, respectively. In Fig. 3, only the average curves are shown per tissue type. Although the difference between the average spectra of colon and tumor tissue is larger in the visual part of the spectrum, the standard deviation (not shown) is also higher in the visual part compared to the near-infrared part. Furthermore, the SVM used for the classification of healthy colon and tumor does not consider each feature individually but uses the combination of features to find the optimal hyperplane that differentiates the two classes. Accordingly, the combination of the features from the near-infrared part of the spectrum might give a better result than the combination of the features from the visual part of the spectrum, even though these distinctive differences are hard to visualize in the near-infrared part of the spectrum. In line with this, the MCC value for the discrimination between healthy and tumor was slightly higher in the near-infrared part of the spectrum, and the combination of the near-infrared and visual parts of the spectrum gives the best results because of the combinations of features that can be made. This increases the accuracy by 0.21, to 0.88. For the translation of the technique to an in vivo setting, where it will be used during surgery, the large dependence on the near-infrared wavelength range is favorable. The main difference between an ex vivo setting and an in vivo setting is the presence of blood.
For oxygenated and deoxygenated blood, the main absorption bands are located in the visual wavelength range; blood absorption will therefore have little influence in the near-infrared wavelength range. The influence of blood in the translation to an in vivo setting using a classification method based mostly on the near-infrared wavelength range will thus be small. Previous work in our group showed good results when combining a quadratic classifier and a linear SVM for tissue identification in colorectal cancer using fiberoptic DRS. In the current study, two different classifiers were also used to distinguish between fat, healthy colorectal wall, and tumor. First, a quadratic classifier was used to classify fat; second, a linear SVM was used to distinguish healthy colorectal wall from tumor. Because fat was classified first with the quadratic classifier, a binary task was left for the SVM, so a simple linear SVM could be used to distinguish healthy colorectal wall from tumor tissue. To classify three tissue types using only SVMs, a one-against-one or one-against-all scheme would be needed, resulting in a combination of at least three SVMs and thus a more complex classifier than the combination currently used. The more complex the classifier, the more prone it is to overfitting. Because the classification of fat was easy to perform, a simpler approach could be used in the form of a two-step classification, combining a quadratic classifier with a single SVM. The classifiers created in this study were based only on spectral features. However, because 2-D images were obtained with the hyperspectral cameras, there is also the option of using spatial and textural properties of the images to classify the pixels.
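A minimal sketch of such a two-step scheme, on synthetic features standing in for real spectra (scikit-learn's QuadraticDiscriminantAnalysis and a linear SVC are used as stand-ins for the study's classifiers):

```python
# Sketch of the two-step classification described above (synthetic data):
# step 1 separates fat from non-fat with a quadratic classifier, step 2
# separates healthy wall from tumor with a single linear SVM.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 100, 5  # samples per class, number of spectral features
X = np.vstack([rng.normal(loc, 0.3, size=(n, d)) for loc in (0.0, 2.0, 4.0)])
y = np.repeat(["fat", "colon", "tumor"], n)

# Step 1: quadratic classifier, fat vs non-fat.
qda = QuadraticDiscriminantAnalysis().fit(X, y == "fat")

# Step 2: linear SVM, trained only on the non-fat classes.
mask = y != "fat"
svm = SVC(kernel="linear").fit(X[mask], y[mask])

def classify(X_new):
    is_fat = qda.predict(X_new)
    return np.where(is_fat, "fat", svm.predict(X_new))

print(classify(X[[0, n, 2 * n]]))  # one sample drawn from each class
```

Because the first step removes fat, the SVM only ever faces a binary problem, which is what keeps the overall classifier simple and less prone to overfitting than a one-against-one combination of three SVMs.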
In the current study, this option was not used because the spatial and textural properties during surgery will be very different from those obtained in this study. For future studies, however, textural properties may be taken into account and could further improve the current classification results. In this ex vivo study, the pathologist cut cross-sectional slices of the tumor and colorectal wall, providing a large surface area of tumor and healthy tissue. This method was chosen to obtain a sufficient amount of data to create a reliable classification. In a surgical setting, such large surfaces of tumor will not be seen. Rectal tumors start developing in the mucosa and, when becoming more advanced, grow through the muscle layer into the surrounding mesorectal fat. In contrast to the large volume of tumor present in the lumen and wall of the rectum, smaller volumes of tumor will be present in the mesorectal fat and possibly in the resection surface created by the surgeon. The main question for future research is therefore whether the current classification will still be able to detect an area of tumor tissue that is much smaller than the cross-sectional slices and mainly surrounded by healthy tissue. As a next step toward in vivo use, entire resected specimens should be imaged with HSI to validate the current accuracy in a more realistic setting. To be able to perform HSI during surgery, some technical changes need to be made. The current set-up is a push-broom camera, in which the samples are scanned by moving through the imaging line of the camera. In an in vivo setting, especially during laparoscopic surgery, this will not be possible. Therefore, a snapshot multispectral camera should be used, which can be attached to a laparoscopic system. In a multispectral camera, only a limited number of wavelengths can be measured; these wavelengths should be chosen based on previous research.
Therefore, further research should address the selection of the most important wavelengths for distinguishing tumor from healthy surrounding tissue. Using a snapshot camera will reduce the data acquisition time from 1 min with the current set-up to 1 s. Moreover, the preprocessing time will decrease because of the limited number of wavelengths acquired. Therefore, when the current set-up is transformed into one that can be used in vivo, real-time tissue classification will be possible. Furthermore, in the in vivo setting, the measurements will be less controlled than in the current study: the illumination of the tissue will vary during surgery, and specular reflection and glare will be present. These issues should be taken into consideration before starting an in vivo study. Finally, when performing measurements in vivo, a real-time classification should be available; the current classification method would allow such real-time use.
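One possible approach to such a wavelength selection is a univariate ranking of bands; the sketch below is purely hypothetical (synthetic spectra, and an ANOVA F-score via scikit-learn's SelectKBest, which is not necessarily the method the authors would choose):

```python
# Hypothetical sketch of wavelength (band) selection on synthetic spectra:
# rank 50 bands between 400 and 1700 nm by an ANOVA F-score and keep the
# k most discriminative ones for a snapshot multispectral camera.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 1700, 50)    # coarse grid of 50 bands
n = 200
X = rng.normal(size=(n, wavelengths.size))  # toy "spectra"
y = rng.integers(0, 2, size=n)              # healthy (0) vs tumor (1)

# Make the bands between 1150 and 1300 nm informative for the tumor class.
informative = (wavelengths > 1150) & (wavelengths < 1300)
X[np.ix_(y == 1, informative)] += 2.0

selector = SelectKBest(f_classif, k=5).fit(X, y)
selected = wavelengths[selector.get_support()]
print(np.round(selected))  # the selected bands fall in the informative region
```

In practice, a multivariate criterion would likely be needed, since the SVM exploits combinations of features rather than single wavelengths, but the principle of reducing hundreds of bands to a handful for a multispectral camera is the same.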

Conclusion

In this ex vivo study, fat, healthy colorectal wall, and tumor tissue could be distinguished using HSI with an accuracy of 0.88 (STD = 0.13). When the accuracy was determined per patient, a mean accuracy of 0.93 (STD = 0.12) was obtained. Two hyperspectral cameras were used, one in the visual wavelength range and one in the near-infrared wavelength range. Using only one of the two cameras decreased the accuracy, for both the visual and the near-infrared camera. The results of this study show the potential of using HSI during colorectal surgery to increase the number of radical resections. Future research should focus on imaging entire specimens and on translating the technique to an intraoperative setting. This should result in a technique that provides accurate real-time tissue classification during laparoscopic colorectal cancer surgery.