
Medical Image Registration Algorithm Based on Bounded Generalized Gaussian Mixture Model.

Jingkun Wang1, Kun Xiang2, Kuo Chen3, Rui Liu2, Ruifeng Ni2, Hao Zhu2, Yan Xiong1.   

Abstract

In this paper, a method for medical image registration based on the bounded generalized Gaussian mixture model is proposed. The bounded generalized Gaussian mixture model is used to approximate the joint intensity distribution of the source medical images. The mixture model is formulated within a maximum likelihood framework and solved by an expectation-maximization algorithm. The registration performance of the proposed approach on different medical images is verified through extensive computer simulations. Empirical findings confirm that the proposed approach is significantly better than conventional ones.
Copyright © 2022 Wang, Xiang, Chen, Liu, Ni, Zhu and Xiong.


Keywords:  Gaussian mixture model; bounded generalized Gaussian mixture model; gray-level-based registration; medical image registration; multimodal

Year:  2022        PMID: 35720703      PMCID: PMC9201218          DOI: 10.3389/fnins.2022.911957

Source DB:  PubMed          Journal:  Front Neurosci        ISSN: 1662-453X            Impact factor:   5.152


Introduction

Image registration is an essential part of computer vision and image processing (Visser et al., 2020), and is widely used in medical image analysis and intelligent vehicles (Zhu et al., 2013, 2017, 2021a,2021b,2022). Medical image analysis is the basis for judging the patient’s condition in future intelligent or computer-aided diagnosis and treatment (Weissler et al., 2015; Yang et al., 2018). More importantly, image registration sets the stage for subsequent image segmentation and fusion (Saygili et al., 2015; Zhu et al., 2019). Current clinical practice typically involves printing images onto radiographic film and viewing them on a lightbox. A computerized approach offers potential benefits, particularly by accurately aligning the information in different images and providing tools to visualize the composite image; a key stage in this process is the alignment, or registration, of the images (Hill et al., 2001). The premise of image registration is that the reference image and the floating image share a common anatomical region (Gholipour et al., 2007; Reaungamornrat et al., 2016). Registration determines the spatial coordinate transformation between the pixels of the two images so that the corresponding region of the reference image coincides with the floating image in space (Zhang et al., 2019). This means that the same anatomical point on the human body occupies the same spatial position (the same position, angle, and size) in the two matched images (Gefen et al., 2007). Medical image registration methods fall into two classes: feature-based registration and gray-level-based registration (Sengupta et al., 2021). Feature-based registration does not directly use the gray-level information of the image; instead, it extracts geometric features (such as corners, centers of closed regions, edges, and contours) that remain invariant in the images to be registered.
The parameter values of the transformation model between the images to be registered are obtained by describing the features of the two images and establishing a matching relationship between them (Huang, 2015). Feature-based registration requires less computation, runs faster, and is robust to changes in gray-level scale. However, its registration accuracy is usually not as high as that of gray-level-based image registration (Li et al., 2020; Ran and Xu, 2020). In gray-level-based medical image registration, a similarity measure function between images is established from the gray information of the entire image (Yan et al., 2020), and the transformation model parameters are obtained by maximizing or minimizing the value of this similarity measure (Zhang et al., 2019). Because the gray-level-based approach uses all the gray information of the image during registration, the precision and robustness of the obtained transformation model are higher than those of feature-based registration (Frakes et al., 2008). Commonly used gray-level-based registration methods include the sequential similarity detection algorithm (SSDA), cross-correlation, mutual information, and phase correlation (Gupta et al., 2021). Building on these traditional algorithms, Yan et al. (2010) developed a fast and effective SSDA variant. Anuta (1970) proposed an image registration technique that uses the Fourier transform to compute the cross-correlation, improving registration speed. Evangelidis and Psarakis (2008) offered a modified version of the correlation coefficient as a performance criterion for image alignment. Zheng et al. (2011) proposed a cross-correlation registration algorithm based on image rotation projection that avoids the rotation and interpolation steps of image registration, reducing the data dimension and computational complexity.
For image registration using mutual information as a similarity measure, Pluim et al. (2000) combined the image gray level with spatial information and incorporated the image gradient into the algorithm, which successfully addressed the problem of finding the global optimal solution during registration. A direct image registration method using mutual information (MI) as an alignment metric was proposed by Dame and Marchand (2012), in which a set of two-dimensional motion parameters is estimated accurately in real time by maximizing the mutual information. Lu et al. (2008) proposed a new joint histogram estimation method that uses a Hanning-windowed sinc approximation function as the kernel for partial volume interpolation. Orchard and Mann (2009) applied maximum likelihood clustering to the joint intensity scatter plot: the cluster densities are modeled as a Gaussian mixture model (GMM), and the expectation-maximization (EM) method is used to solve the model iteratively. Sotiras et al. (2013) systematically surveyed registration techniques applied to medical images, providing an extensive account of the state of the art. Pluim et al. (2004) compared the performance of mutual information as a registration measure with that of other f-information measures; an important finding is that several measures can yield significantly more accurate results than mutual information. Klein et al. (2007) compared the performance of eight optimization methods for non-rigid registration of medical images. The results show that the Robbins-Monro method is the best choice in most applications; with this approach, the computation time per iteration can be reduced by a factor of approximately 500 without affecting the rate of convergence. However, the support of a GMM is (−∞, +∞), so the model cannot restrict the target information to a fixed range.
In computer vision, image pixel values are distributed over the bounded interval [0, 255]. Therefore, the bounded generalized Gaussian mixture model (BGGMM) is used to model the image (Nguyen et al., 2014); it describes the joint intensity vector distribution of the image pixels more thoroughly, highlights image detail, and is robust. Based on the BGGMM, this paper models both single-modality and multimodal image registration and then solves the model within a maximum likelihood estimation framework (Zhu and Cochoff, 2002). Experimental results on a large number of image data sets show that, compared with existing gray-level-based medical image registration algorithms, the proposed method achieves higher registration accuracy.

Problem Formulation

Suppose that two different medical images are to be registered: one is the reference image, denoted A, and the other is the floating image, denoted B. The two images come from different sensors, so each pixel position x in the image space corresponds to a pixel value in each image, and the joint intensity vector collects the intensity values of the two images at that position (the equations lost in extraction are reconstructed here from the surrounding definitions):

I(x) = [A(x), B(x)]^T,

where A(x) and B(x) denote the pixel values of the reference image and the floating image at position x. To register the two images, N registration parameters are assigned to describe the spatial transformation of the image; let θ denote the set of all registration parameters. The joint intensity vector after applying the registration parameters can then be re-expressed as

I(x; θ) = [A(x), B(θ(x))]^T.

The bounded generalized Gaussian mixture model (BGGMM) is used to describe the distribution of the joint intensity. The probability distribution of the joint intensity vector is

p(I | ρ) = Σ_{m=1}^{M} τ_m BG(I | u_m, σ_m, Λ_m),

where ρ = {u, σ, Λ, τ} collects the model parameters, M is the number of bounded generalized Gaussian (BGG) distribution components in the mixture, and u_m, σ_m, and Λ_m are the mean, covariance, and shape parameter of the m-th BGG component. τ_m is the weight of the m-th component and satisfies τ_m ≥ 0 and Σ_{m=1}^{M} τ_m = 1. BG(·) denotes a BGG distribution,

BG(I | u_m, σ_m, Λ_m) = f(I | u_m, σ_m, Λ_m) H(I | ∂_m) / ∫_{∂_m} f(s | u_m, σ_m, Λ_m) ds,

where ∂_m is the bounded support region, H(I | ∂_m) is the indicator function equal to 1 when I ∈ ∂_m and 0 otherwise, and f(·) is the (unbounded) generalized Gaussian density

f(x | u, σ, Λ) = [Λ √(Γ(3/Λ)/Γ(1/Λ)) / (2σ Γ(1/Λ))] exp( −(Γ(3/Λ)/Γ(1/Λ))^{Λ/2} |x − u|^Λ / σ^Λ ),

where Γ(⋅) is the gamma function. With X denoting the number of pixels, the log-likelihood function of image registration is

L(ρ, θ) = Σ_{x=1}^{X} log Σ_{m=1}^{M} τ_m BG(I(x; θ) | u_m, σ_m, Λ_m).

In the maximum likelihood framework, a hidden variable z_x is introduced that indicates the cluster to which I(x; θ) belongs, i.e., z_x = m when it belongs to the m-th BGG distribution component. The complete-data log-likelihood of the model can then be written as

L_c(ρ, θ) = Σ_{x=1}^{X} Σ_{m=1}^{M} 1[z_x = m] ( log τ_m + log BG(I(x; θ) | u_m, σ_m, Λ_m) ).
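As an illustration, the bounded generalized Gaussian density described above can be sketched for the one-dimensional case. This is not the authors' code: `bgg_pdf` is a hypothetical helper, built on SciPy's `gennorm` (the unbounded generalized Gaussian), with the bounded support chosen as [0, 255] to match the pixel range discussed in the introduction.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import gennorm

def bgg_pdf(x, mu, sigma, lam, lo=0.0, hi=255.0):
    """Bounded generalized Gaussian density on [lo, hi].

    mu, sigma, lam play the roles of u, sigma, and the shape parameter
    Lambda; lam = 2 gives a (truncated) Gaussian, lam = 1 a Laplacian.
    """
    # scale of the unbounded generalized Gaussian whose std. dev. is sigma
    alpha = sigma * np.sqrt(gamma(1.0 / lam) / gamma(3.0 / lam))
    f = gennorm.pdf(x, lam, loc=mu, scale=alpha)
    # renormalize by the probability mass inside the bounded support
    mass = (gennorm.cdf(hi, lam, loc=mu, scale=alpha)
            - gennorm.cdf(lo, lam, loc=mu, scale=alpha))
    return np.where((x >= lo) & (x <= hi), f / mass, 0.0)
```

The renormalization by the in-support mass is exactly what distinguishes the bounded model from an ordinary generalized Gaussian: the density integrates to one over [0, 255] rather than over the whole real line.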

Parameters Estimation

Density Estimation

According to the above model, the EM algorithm is used to estimate the parameters of the model. The EM algorithm alternates two steps, the E-step and the M-step; here t denotes the t-th iteration, and the final model parameters are obtained by iterating these two steps until convergence. In the E-step, given the current parameters ρ^(t), the posterior probability that the joint intensity vector I(x; θ) belongs to the m-th cluster is computed:

η_m(x) = τ_m^(t) BG(I(x; θ) | u_m^(t), σ_m^(t), Λ_m^(t)) / Σ_{j=1}^{M} τ_j^(t) BG(I(x; θ) | u_j^(t), σ_j^(t), Λ_j^(t)).

In the M-step, using the posterior η_m(x) and the current parameters ρ^(t), the parameters at iteration (t + 1) are obtained by maximizing the expected complete-data log-likelihood Q(ρ, ρ^(t)). The updates of the means u_m and covariances σ_m take the form of η-weighted sample moments plus correction terms that account for the bounded support ∂_m; these corrections involve sign(x) (equal to 1 when x ≥ 0 and 0 otherwise) and expectations over the truncated component density, which are approximated by Monte Carlo averages over O random variables S_o drawn from the corresponding component distribution. O is a large integer; O = 10^6 is used in this paper. With the other parameters fixed, the shape parameter Λ_m is estimated by the Newton-Raphson method. Each iteration requires the first and second derivatives of Q(ρ, ρ^(t)) with respect to Λ_m, and the next iterate of Λ_m is

Λ_m^(t+1) = Λ_m^(t) − ϑ [∂²Q/∂Λ_m²]^(−1) ∂Q/∂Λ_m,

where ϑ is a scale (step-size) factor. Finally, the prior probability of each component is updated as

τ_m^(t+1) = (1/X) Σ_{x=1}^{X} η_m(x).
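The EM alternation described above can be sketched in miniature. The sketch below simplifies heavily: it works in one dimension, fixes the shape parameter at Λ = 2 (so each component is an ordinary Gaussian), and omits both the bounded-support correction terms and the Newton-Raphson update of Λ; `em_mixture` and its quantile initialization are hypothetical, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def em_mixture(x, M=2, iters=100):
    """Simplified EM for a 1-D mixture: E-step responsibilities, M-step
    moment updates. The paper's full M-step additionally applies
    bounded-support corrections (Monte Carlo estimates with O = 10^6
    samples) and a Newton-Raphson update of the shape parameter."""
    # deterministic quantile initialization in place of k-means
    mu = np.quantile(x, (np.arange(M) + 0.5) / M)
    sig = np.full(M, x.std())
    tau = np.full(M, 1.0 / M)
    for _ in range(iters):
        # E-step: posterior probability that sample x belongs to cluster m
        dens = tau * norm.pdf(x[:, None], mu, sig)
        eta = dens / dens.sum(axis=1, keepdims=True)
        # M-step: eta-weighted updates of weights, means, and variances
        nk = eta.sum(axis=0)
        tau = nk / len(x)
        mu = (eta * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((eta * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return tau, mu, sig
```

Even in this reduced form, the E-step posterior is the same η_m(x) used in the prior update τ_m = (1/X) Σ_x η_m(x).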

Motion Parameters Estimation

The registration parameter θ is optimized by setting the derivative of Q(ρ, ρ^(t)) with respect to θ to zero, which gives equation (23). To find a model motion parameter θ that satisfies equation (23), a small motion increment Δθ is introduced and θ + Δθ replaces θ as the estimated parameter. An approximately linear spatial transformation then yields equation (24), and combining equations (23) and (24) gives equation (25). The registration parameters are optimized by solving equation (25) for the motion increment Δθ.
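The small-increment idea can be illustrated on a one-dimensional toy problem: linearize the warped signal around the current parameter, solve for the increment Δθ in a least-squares sense, and repeat. The sum-of-squared-differences objective here is only a stand-in for the paper's Q(ρ, ρ^(t)); `estimate_shift` is a hypothetical helper.

```python
import numpy as np

def estimate_shift(ref, flo, iters=20):
    """Estimate a 1-D shift theta with the small-increment scheme:
    linearize the warped signal around the current theta and solve a
    least-squares problem for the increment at every step."""
    theta = 0.0
    xs = np.arange(len(ref), dtype=float)
    for _ in range(iters):
        warped = np.interp(xs + theta, xs, flo)   # floating signal at shift theta
        grad = np.gradient(warped)                # d(warped)/d(theta)
        resid = ref - warped
        dtheta = (grad @ resid) / (grad @ grad)   # least-squares increment
        theta += dtheta
        if abs(dtheta) < 1e-8:
            break
    return theta
```

Each pass computes one increment from the linearized model, exactly the role Δθ plays in equation (25); the loop stops when the increment vanishes.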

Implementation

In summary, the proposed image registration algorithm based on the BGGMM is shown in Algorithm 1 and Figure 1. This paper regards the M BGG distribution components in the joint intensity scatter plot of the registered images as M clusters, uses the k-means method to find the cluster centers, and thereby completes the parameter initialization of the BGGMM; the shape parameter is initialized as Λ = 2. The algorithm also uses multi-resolution image registration, with the resolutions set to [0.1, 0.2, 1]. The image is registered first at low resolution and then at high resolution, and the registration result at each resolution serves as the initialization for the next. This reduces computation time and accelerates convergence in the iterative process of the proposed algorithm.
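The multi-resolution schedule can be sketched as follows. `register_at` stands in for any single-resolution registration routine, and rescaling the translation components between levels is an assumption about how the initialization is propagated (the rotation angle is resolution-independent); none of the names below come from the paper.

```python
import numpy as np
from scipy.ndimage import zoom

def register_multires(ref, flo, register_at, scales=(0.1, 0.2, 1.0)):
    """Coarse-to-fine registration: run a single-resolution routine at
    each scale and pass its result on as the next level's init.
    register_at(ref_s, flo_s, init) -> (tx, ty, angle) is a placeholder
    for any per-level registration routine."""
    tx, ty, ang = 0.0, 0.0, 0.0
    prev = scales[0]
    for s in scales:
        # translations found at a coarser level are rescaled to this
        # level; the rotation angle needs no rescaling
        tx, ty = tx * s / prev, ty * s / prev
        prev = s
        ref_s, flo_s = zoom(ref, s), zoom(flo, s)
        tx, ty, ang = register_at(ref_s, flo_s, (tx, ty, ang))
    return tx, ty, ang
```

Because the coarse levels operate on tiny images, most of the work happens only in the final full-resolution pass, which is where the speed-up described in the text comes from.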
Algorithm 1

Description of algorithm for medical image registration based on BGGMM.

FIGURE 1

Flowchart of medical image registration.

The EM algorithm is first used to estimate the BGGMM parameter ρ on the joint intensity scatter plot. After the optimal BGGMM parameter ρ has been estimated over T1 iterations, the motion adjustment is performed: a small motion increment is introduced and the motion parameters are updated over T2 iterations to ensure the optimal parameters are obtained. Finally, these two stages are repeated until convergence to achieve image registration.

Experiment

The computer environment of the experiments in this paper is an Intel(R) Core(TM) i5-7300HQ CPU @ 2.50 GHz with 8 GB RAM, running 64-bit Windows 10. All simulations are implemented in MATLAB R2020b. The mutual information method (MI) (Lu et al., 2008), the enhanced correlation coefficient (ECC) (Evangelidis and Psarakis, 2008), and the ensemble registration approach (ER) (Orchard and Mann, 2009) are compared to evaluate the performance of the proposed method. The average pixel displacement (PAD) (Li et al., 2016) is used as the registration error to objectively measure the performance of the different approaches. For a perfectly registered pair, the PAD is zero; the larger the PAD, the larger the deviation and the lower the registration accuracy. If the PAD exceeds 3, the registration is considered to have failed. The MURA (Rajpurkar et al., 2017) and Atlas (Yu and Zheng, 2016) public image data sets are used to verify the performance of these methods. The PAD results on the two data sets are reported in Table 1, where the bold values indicate the best results. The t-test is used to assess the significance of the difference between the PAD results of the BGGMM method and those of the other three registration methods on the public data sets; P < 0.05 means the difference is statistically significant, and the comparison results are summarized in Table 2. On both the MURA and Atlas data sets, the PAD of the BGGMM method was the smallest, and the differences from the PAD results of the ECC and ER methods were statistically significant (P < 0.05). On the MURA data set, the difference between the PAD results of the BGGMM and MI methods was not statistically significant (P > 0.05); however, on the Atlas data set, the PAD of the BGGMM method was smaller than that of the MI method, and the difference was statistically significant (P < 0.05).
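One plausible reading of the PAD metric, averaging the displacement between the estimated and ground-truth rigid transforms over all pixel positions, can be sketched as follows; the exact definition of Li et al. (2016) may differ, and `pad` is a hypothetical helper.

```python
import numpy as np

def pad(theta_est, theta_true, shape):
    """Average pixel displacement between two rigid transforms
    (tx, ty, angle in degrees), averaged over every pixel position."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()]).astype(float)

    def apply(theta):
        tx, ty, deg = theta
        a = np.deg2rad(deg)
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        return rot @ pts + np.array([[tx], [ty]])

    diff = apply(theta_est) - apply(theta_true)
    return float(np.mean(np.linalg.norm(diff, axis=0)))
```

The paired t-test behind Table 2 would then be a call such as `scipy.stats.ttest_rel(pads_bggmm, pads_other)`, comparing per-image PAD values of two methods.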
TABLE 1

The PAD results of image registration on public data sets.

Method / dataset     MURA images    Atlas images
MI                   1.9162         0.7168
ECC                  8.1494         10.7606
ER                   6.5182         9.1342
Proposed method      0.2271         0.6801

The bold values indicate the best results.

TABLE 2

The t-test results of the PAD of BGGMM versus other image registration methods on public data sets.

Database    Comparison       p-value
MURA        BGGMM vs. MI     0.132
            BGGMM vs. ECC    0.000
            BGGMM vs. ER     0.000
Atlas       BGGMM vs. MI     0.034
            BGGMM vs. ECC    0.001
            BGGMM vs. ER     0.000

Musculoskeletal Radiographs Dataset

The proposed approach is tested on an ensemble of MURA images. The test set is drawn from the training data of the Large Dataset for Abnormality Detection in Musculoskeletal Radiographs (MURA) project; one slice of this dataset is depicted in Figure 2. The MURA dataset includes 12,173 patients, 14,863 studies, and 40,561 multi-view radiographs. Each study belongs to one of seven standard upper-limb radiographic study types (fingers, elbows, forearms, hands, humerus, shoulders, and wrists) and was manually labeled as normal or abnormal by radiologists. The initial image to be registered is generated by random translation and rotation, with the pixel and angle transformation parameters drawn from the ranges [-20, 20] and [-10, 10], respectively. This paper sets M = 6; that is, the number of BGG distribution components in the initial model is 6.
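The random misalignment used to generate the images to be registered can be reproduced along these lines; `random_misalignment` is an illustrative helper, with the translation and rotation drawn from the stated [-20, 20] pixel and [-10, 10] degree ranges.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def random_misalignment(img, rng=None):
    """Create a test image by a random rigid perturbation: translation
    in [-20, 20] pixels and rotation in [-10, 10] degrees."""
    if rng is None:
        rng = np.random.default_rng()
    tx, ty = rng.uniform(-20.0, 20.0, size=2)
    ang = rng.uniform(-10.0, 10.0)
    out = rotate(img, ang, reshape=False, mode='nearest')
    out = shift(out, (ty, tx), mode='nearest')
    return out, (tx, ty, ang)
```

Returning the drawn parameters alongside the perturbed image gives the ground truth against which a registration error such as PAD can be computed.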
FIGURE 2

One slice of the MURA dataset. (A) Finger, (B) Hand, (C) Forearm, and (D) Shoulder.

The PAD values on the MURA dataset are summarized in the first column of Table 1. The average registration error of the proposed BGGMM method is significantly lower than that of the other methods, and the BGGMM method is more advantageous in edge retention and in preserving the information content of the source images. The registration results of the four methods are shown in Figure 3; in each case, the source image is registered to an image transformed by rotation and translation. Figure 3A shows the source image and the image to be registered.
FIGURE 3

The registration results of four methods in four skeleton images of MURA dataset. (A) Initialization. (B) BGGMM. (C) ECC. (D) ER. (E) MI.

To test the performance of BGGMM under different noise levels, Gaussian noise is added to the Finger images, with the noise level increased incrementally. The mean of the Gaussian noise is 0, and the variance ranges from 0 to 0.04. As Figure 4A shows, the registration error of the ER algorithm is the largest, while the registration error of the BGGMM algorithm is lower than that of the other methods at every noise level.
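The noise sweep can be reproduced with a small helper (illustrative, assuming images scaled to [0, 1], as MATLAB's imnoise does):

```python
import numpy as np

def add_gaussian_noise(img, var, rng=None):
    """Corrupt a [0, 1]-scaled image with zero-mean Gaussian noise of the
    given variance; the experiment sweeps var from 0 to 0.04."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = img + rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Sweeping `var` over, say, `np.linspace(0.0, 0.04, 9)` and recording the PAD at each level yields curves of the kind plotted in Figure 4A.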
FIGURE 4

PAD of different methods under different noise levels and different displacements in MURA dataset. (A) PAD under different noise levels on Finger image. (B) PAD under different displacement on Finger image. (C) PAD under different noise levels on Hand image. (D) PAD under different displacement on Hand image. (E) PAD under different noise levels on Forearm image. (F) PAD under different displacement on Forearm image. (G) PAD under different noise levels on Shoulder image. (H) PAD under different displacement on Shoulder image.

The registration performance of the algorithms on the Finger images is also tested under different displacements, as shown in Figure 4B. The displacement is applied by moving the image t pixels horizontally and vertically, where t ranges from 0 to 30 (the horizontal axis in Figure 4B). The registration performance of the proposed algorithm is better than that of the other algorithms at all displacements. The ECC algorithm resists displacement interference poorly, and its results are regarded as registration failures; the ER algorithm registers well only for small displacements, while the BGGMM algorithm performs best when the displacement is large. Similarly, Figures 4C-H show the PAD values of the different methods on the Hand, Forearm, and Shoulder images under different noise levels and displacements. The proposed method has the lowest registration error and the best registration performance.

Atlas Dataset

The Atlas dataset is a multimodal dataset that includes more than 13,000 MRI and CT images of patients with brain diseases. The MRI images include T1-, T2-, and PD-weighted images, and the dataset also contains lesion images of patients at different lesion stages. T1-, T2-, and PD-weighted MR images are selected, as shown in Figure 5. The initial image to be registered is generated by random translation and rotation, with the pixel and angle transformation parameters drawn from the ranges [-20, 20] and [-10, 10], respectively. This paper sets M = 6; that is, the number of BGG distribution components in the initial model is 6.
FIGURE 5

Brain slice images from the Atlas dataset. (A) MR-T1, (B) MR-T2, (C) MR-PD.

The PAD values on the Atlas dataset are summarized in the second column of Table 1. The average registration error of the proposed BGGMM method is significantly lower than that of the other methods, and the BGGMM method has an advantage in preserving the edge information of the source image. The registration results of the four methods are shown in Figure 6; for each method, images of two different modalities are registered.
FIGURE 6

The registration results of four methods in the brain images of the Atlas dataset.

The registration performance of the BGGMM, ECC, and ER methods is tested under different Gaussian noise levels; Figure 7A compares the registration results. The mean of the Gaussian noise is 0, and the variance ranges from 0 to 0.04. The registration error of the ECC algorithm is the largest, and the PAD values of the other compared algorithms exceed 3 in this experiment, which is regarded as registration failure. The BGGMM algorithm has the lowest PAD value and good registration performance.
FIGURE 7

PAD of BGGMM, ECC, ER, and MI methods under different noise levels and different displacements. (A) PAD of different methods under different noise levels. (B) PAD of different methods under different displacements.

As shown in Figure 7B, the displacement is applied by moving the image t pixels horizontally and vertically, where t ranges from 0 to 30. When the displacement becomes large, the error of the ER algorithm grows and exceeds the effective range. In contrast, the PAD value of the BGGMM algorithm remains unaffected as the displacement increases, staying at a low level and performing best among the four algorithms.

Conclusion

A medical image registration method based on a BGGMM is proposed in this paper. First, a BGGMM is applied to model the joint intensity vector distribution of the medical images. The proposed approach then formulates the model within a maximum likelihood (ML) framework and estimates the model parameters with an EM algorithm. The experimental results indicate that, compared with benchmark methods, the proposed BGGMM significantly improves registration performance on medical images; its advantage is more pronounced when the source images contain more interference and larger offsets. In the future, research on medical image fusion will be carried out based on BGGMM image registration, which will provide convenience for medical image analysis.

Data Availability Statement

The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.

Author Contributions

YX and HZ conceived and designed the study. JW and KX conducted most of the experiments and data analysis and wrote the manuscript. KC, RL, and RN participated in collecting materials and assisting in drafting manuscripts. All authors reviewed and approved the manuscript.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Hill D. L., Batchelor P. G., Holden M., Hawkes D. J. Medical image registration. Phys Med Biol, 2001.
2. Pluim J. P. W., Maintz J. B. A., Viergever M. A. F-information measures in medical image registration. IEEE Trans Med Imaging, 2004.
3. Zhu Y.-M., Cochoff S. M. Likelihood maximization approach to image registration. IEEE Trans Image Process, 2002.
4. Lu X., Zhang S., Su H., Chen Y. Mutual information-based multimodal image registration using a novel joint histogram estimation. Comput Med Imaging Graph, 2008.
5. Evangelidis G. D., Psarakis E. Z. Parametric image alignment using enhanced correlation coefficient maximization. IEEE Trans Pattern Anal Mach Intell, 2008.
6. Dame A., Marchand E. Second-order optimization of mutual information for real-time image registration. IEEE Trans Image Process, 2012.
7. Sotiras A., Davatzikos C., Paragios N. Deformable medical image registration: a survey. IEEE Trans Med Imaging, 2013.
8. Orchard J., Mann R. Registering a multisensor ensemble of images. IEEE Trans Image Process, 2009.
9. Weissler B., Gebhardt P., Dueppenbecker P. M., et al. A digital preclinical PET/MRI insert and initial results. IEEE Trans Med Imaging, 2015.
10. Visser M., Petr J., Müller D. M. J., et al. Accurate MR image registration to anatomical reference space for diffuse glioma. Front Neurosci, 2020.
