
Hand-eye calibration using a target registration error model.

Elvis C S Chen, Isabella Morgan, Uditha Jayarathne, Burton Ma, Terry M Peters.

Abstract

Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) are recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.

Keywords:  TRE model; calibration; calibration measurements; camera tracking; endoscopes; endoscopic camera; evaluated guided calibration; guided hand-eye calibration; homologous point-line pairs; image fusion; image registration; image sensors; laparoscope; medical image processing; modern operating theatres; monochromatic ball-tip stylus; surgery; surgical cameras; target registration error model; tracker sensor; visualisation techniques; webcam

Year:  2017        PMID: 29184657      PMCID: PMC5683221          DOI: 10.1049/htl.2017.0072

Source DB:  PubMed          Journal:  Healthc Technol Lett        ISSN: 2053-3713


Introduction

Surgical cameras such as laparoscopes, endoscopes, and pass-through head-mounted displays are prevalent in modern operating theatres and are often used as a surrogate for direct vision. The video images obtained from such cameras can be enhanced to show structures beneath the anatomical surfaces in the camera view by overlaying medical images such as computed tomography, magnetic resonance imaging, or ultrasound onto the images in an augmented reality environment. One way to implement the augmented reality environment is to track the camera using a three-dimensional (3D) tracking system; to perform the image overlay, the geometric relationship between the optical axis of the camera and the tracking target must be accurately known. Determining this relationship is known as the hand–eye calibration problem. Once calibrated, advanced augmented reality systems for computer-assisted interventions such as liver surgery [1, 2] and nephrectomy [3] can be implemented to assist in surgical delivery. Hand–eye calibration for surgical cameras remains an active research topic [4], with particular requirements for accuracy, computational complexity, data selection [5], and in situ validation. Most approaches are similar to those described in the robotics literature [6], relying on imaging salient features of a stationary object from many different poses, then solving for rotation and translation separately [7], jointly [8], or iteratively [9, 10]. From a registration point of view [11], these approaches suffer from two drawbacks: the lack of an in situ accuracy assessment, and the lack of a well-defined data acquisition protocol for achieving accurate calibration.
To address these issues, we introduce a novel concept of ‘guided calibration’, in which a registration error prediction model is incorporated to interactively guide the placement of a calibration tool with the registration error assessed for each successive measurement. For surgical interventions employing an external tracking system, the hand–eye calibration between the optical axis of the camera and the attached dynamic reference frame (DRF) can be formulated as a registration problem between homologous point–line pairs, where each point is the measured location of a tracked monochromatic ball and each corresponding line is the line of projection of the centre of the ball onto the image plane of the camera. Each point is measured in the coordinate frame of the DRF attached to the camera and each line is defined in the canonical coordinate frame of the camera, where the origin of the frame is the centre of projection and the z-axis of the frame is the optical axis of the camera. The mapping between the coordinate frame of the DRF and the canonical coordinate frame of the camera can be found by registering the points to their corresponding lines. Efficient [12] and globally optimal [13] algorithms exist that solve the point–line registration problem. Framing calibration as a registration problem makes it possible to use established models for predicting target registration error (TRE) to guide the collection of calibration data. TRE is defined as the distance between a target point r (not used to compute the registration transformation) and its corresponding point after the registration transformation has been applied [14]. TRE is dependent on the error in measuring the registration points. This error, originally named fiducial localisation error (FLE) for the point–point registration problem [14], is defined as the distance between the measured point and the unknown true location of the point before the registration transformation is computed. 
We use a point–line TRE model that was previously described for predicting TRE magnitude in ultrasound calibration [12, 15]. By incorporating such a TRE prediction model, the resulting TRE of the calibration can be assessed as soon as a new measurement is available. Furthermore, by searching the viewing frustum for an optimal fiducial placement and re-assessing the calibration using the TRE model, fiducial placement can be guided so as to minimise the predicted TRE. This guided calibration paradigm was tested on a surgical endoscope, and millimetre accuracy at the centre of the camera frustum was achieved when the tracking DRF was attached to the handle of the laparoscope. We previously described how hand–eye calibration could be solved as a point–line registration problem [16] and provided evidence that the calibration obtained via registration was as accurate as, or more accurate than, several other existing calibration methods. In our previous work, we used a laparoscopic camera with a DRF attached to the tip of the laparoscope. The new contributions of this Letter are a method for guiding the collection of calibration data using a TRE estimation model, and further evaluation of registration-based hand–eye calibration (both guided and unguided) via simulation using a large number of actual calibration measurements from a tracked laparoscopic camera with the DRF attached to the handle of the laparoscope.

Methods

As a proof of principle, we tested our calibration method using a C920 webcam (Logitech, Newark, CA, USA) and a single channel of the stereo laparoscope employed by the da Vinci® S surgical system (Intuitive Surgical, Inc., Sunnyvale, CA, USA). We used a webcam as an example of a camera with very different optics compared with the endoscopic camera; such cameras might be used in a stereo-vision application. Similar to many other surgical cameras, the da Vinci® surgical laparoscope is a fixed-focus camera. The webcam was set to fixed-focus mode using its supplied control software. A passive Spectra optical tracking system (Northern Digital Inc. (NDI), Waterloo, ON, Canada) was used as the spatial measuring system. An optical DRF was rigidly attached to the top of the webcam (Fig. 1a) and to the handle of the laparoscope (Fig. 1b), where the distance between the DRF and the surgical camera is ∼80 cm. The intrinsic parameters of the cameras, namely the camera matrix (A) and the lens distortion model (K), were determined using Zhang's method [17] as implemented in the MATLAB Computer Vision System Toolbox (The MathWorks, Natick, MA, USA). All images were undistorted by K prior to any further image processing (Fig. 3b).
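The undistortion step can be illustrated with a short sketch of a two-term radial distortion model of the kind estimated by Zhang's method; the intrinsic and distortion coefficients below are hypothetical placeholders, not the values calibrated for either camera.

```python
# Hypothetical intrinsics and two-term radial distortion coefficients;
# placeholders only, not the calibrated values for either camera.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
k1, k2 = -0.25, 0.08

def distort(xn, yn):
    """Apply the radial model to normalised (pinhole) image coordinates."""
    r2 = xn * xn + yn * yn
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * s, yn * s

def undistort(xd, yd, iters=20):
    """Invert the radial model by fixed-point iteration."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn * xn + yn * yn
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        xn, yn = xd / s, yd / s
    return xn, yn

# Undistort a pixel: pixel -> normalised -> undistort -> pixel.
u, v = 560.0, 180.0
xn, yn = undistort((u - cx) / fx, (v - cy) / fy)
u_ud, v_ud = fx * xn + cx, fy * yn + cy

# Round-trip consistency check on normalised coordinates.
xd, yd = distort(0.3, -0.2)
xu, yu = undistort(xd, yd)
```

With a negative k1 (barrel distortion), undistorted points move away from the principal point, which is why the corners of the undistorted image in Fig. 3b differ visibly from the input.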
Fig. 1

Experimental setup

a Webcam with an optical DRF rigidly attached

b Laparoscope camera (da Vinci S surgical system) with an optical DRF rigidly attached

c Ball-tip stylus as the calibration tool

Fig. 3

Automatic segmentation pipeline for detecting the centre of the ball-tip calibration tool

a Input image

b Undistorted image (note the bottom-left corner as compared with (a))

c Colour thresholding

d Circle detection using Hough transform. The centre of the ith circle, having coordinates (u_i, v_i) in the image coordinate frame, is used to compute the direction of the line of projection in the canonical camera coordinate frame

The hand–eye calibration was formulated as a registration between homologous point–line pairs [12]. An example of performing a calibration measurement under laboratory conditions is shown in Fig. 2. A calibrated ball-tip stylus (Fig. 1c) was used as the calibration tool. The centre of the ball-tip can be calibrated accurately with respect to its DRF using a pivot calibration [18], facilitated by pivoting the ball-tip against an inverted, stationary cone divot. For the ith measurement, the 3D location of the ball-tip (x_i) in the local coordinate system of the camera DRF, and the corresponding image, were acquired as

x_i = (T_cam)^(-1) T_tool p_tip (1)

where T_cam is the measured pose of the camera DRF in tracking system coordinates, T_tool is the measured pose of the calibration tool DRF in tracking system coordinates, and p_tip is the calibrated location of the ball-tip centre in tool DRF coordinates. In (1), we assume that the poses returned by the tracking system are expressed as 4 × 4 homogeneous matrices.
Fig. 2

Apparatus setup for performing a calibration measurement. The tracking system measures the poses T_tool and T_cam of the DRFs attached to the calibration tool and laparoscope handle, respectively. The tracked position of the monochromatic ball-tip of the calibration tool in the laparoscope DRF coordinate frame, x_i, is obtained using (1)
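The pose composition in (1) can be sketched with synthetic poses, assuming the tracker reports 4 × 4 homogeneous matrices; the matrices and ball-tip offset below are fabricated stand-ins for tracker output, not measured values.

```python
import numpy as np

def rot_z(theta):
    """4 x 4 homogeneous rotation about z, used to fabricate example poses."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Synthetic tracker output: poses of the camera DRF and tool DRF in
# tracking-system coordinates (units: mm).
T_cam = translate(100.0, 50.0, 30.0) @ rot_z(0.4)
T_tool = translate(120.0, 80.0, 25.0) @ rot_z(-0.1)

# Calibrated ball-tip centre in tool DRF coordinates (from pivot calibration).
p_tip = np.array([0.0, 0.0, 150.0, 1.0])

# Equation (1): ball-tip position expressed in the camera DRF frame.
x_i = np.linalg.inv(T_cam) @ T_tool @ p_tip
```

Because both poses are expressed in the same tracker frame, composing the inverse camera-DRF pose with the tool pose cancels the tracker coordinate system, leaving the ball-tip in camera-DRF coordinates.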

The ball-tip is projected as a circular pattern onto the image, the centre of which can be segmented automatically by first applying colour thresholding followed by Hough transform circle detection (Fig. 3) [16]. A ray emanating from the centre of the camera toward the segmented centre of projection should, after hand–eye calibration, intersect with x_i. For each segmented circle centre with image coordinates (u_i, v_i), a line orientation can be constructed using the camera matrix A:

l_i = A^(-1) [u_i, v_i, 1]^T, A = [f_x 0 c_x; 0 f_y c_y; 0 0 1] (2)

where (f_x, f_y) and (c_x, c_y) are the focal lengths and the principal point of the pinhole camera, respectively. The line orientation l_i is defined in the canonical camera coordinate frame, where the origin of the frame is the centre of projection and the z-axis of the frame is the optical axis of the camera. For each calibration, n data acquisitions of a homologous pair (x_i, l_i) are recorded. The hand–eye calibration solution is given by the point–line registration that aligns the camera DRF coordinate frame to the canonical camera coordinate frame. Since all rays share a common origin, this registration problem is identical to the perspective-n-point problem commonly encountered in computer vision, for which accurate solutions exist [12].
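A minimal sketch of the back-projection and registration steps: `pixel_to_ray` follows the pinhole relation above, and `point_line_register` is an illustrative alternating projection/Kabsch solver for the homologous point–line problem, not the exact algorithm of [12]; all poses and fiducial positions below are synthetic.

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Unit line-of-projection direction in canonical camera coordinates."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def kabsch(P, Q):
    """Rigid (R, t) minimising sum ||R p_i + t - q_i||^2 (Kabsch/Procrustes)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def point_line_register(X, L, iters=1000):
    """Register points X to lines through the origin with unit directions L,
    with known (homologous) correspondences: alternate projecting the
    transformed points onto their lines and re-solving the rigid fit."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        Y = X @ R.T + t                              # points in camera frame
        Y = (Y * L).sum(axis=1, keepdims=True) * L   # project onto each line
        R, t = kabsch(X, Y)
    return R, t

# Synthetic ground truth: fiducials in front of the camera, 100-300 mm deep.
rng = np.random.default_rng(0)
Y_true = rng.uniform([-50.0, -50.0, 100.0], [50.0, 50.0, 300.0], (15, 3))
L = Y_true / np.linalg.norm(Y_true, axis=1, keepdims=True)

theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 20.0])
X = (Y_true - t_true) @ R_true      # so that R_true x_i + t_true lies on line i

R, t = point_line_register(X, L)
residual = np.linalg.norm(np.cross(L, X @ R.T + t), axis=1).max()
```

The residual is the largest perpendicular distance from a registered point to its line; with noise-free synthetic data it should approach zero as the alternation converges.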
A TRE estimation model for point–line registration with heteroscedastic FLE was previously introduced [12, 15]. It can be encapsulated as

TRE(r) = f(r; x_1 … x_n, Σ_1 … Σ_n, l_1 … l_n) (3)

where the scalar TRE for a target located at r in canonical camera coordinates is estimated from n calibration measurements consisting of the 3D locations of the point fiducials x_i, their FLE covariances Σ_i, and the line orientations l_i. Using (3), the predicted TRE of the calibration can be assessed as soon as a new measurement is acquired. By optimising the fiducial location (and consequently the line orientation) for the subsequent measurement, one can guide the acquisition of calibration measurements so that the predicted TRE for a specified target location is minimised. For guided calibration, we are interested in changes in the predicted TRE values rather than the actual TRE values themselves; therefore, the FLE covariances can be specified with an arbitrary (positive) constant scale factor. Lacking a good model of FLE noise, we assume identical isotropic FLE noise and set the FLE covariances equal to the identity matrix.
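The guided acquisition loop can be sketched as follows. Since the closed-form point–line TRE model of [12, 15] is not reproduced here, `predict_tre` is a stand-in proxy with the qualitative shape of the classical paired-point model [14] (predicted TRE falls with the number of fiducials and rises with the target's distance from their principal axes); every value below is synthetic.

```python
import numpy as np

def predict_tre(r, fiducials):
    """Stand-in TRE proxy: decreases with the number of fiducials n and grows
    with the target's distance from the fiducials' principal axes.  This is
    NOT the point-line model of [12, 15], only a qualitative placeholder."""
    n = len(fiducials)
    c = fiducials.mean(axis=0)
    cov = np.cov((fiducials - c).T) + 1e-9 * np.eye(3)
    evals, evecs = np.linalg.eigh(cov)
    d = evecs.T @ (r - c)                 # target in the principal-axis frame
    # squared distance from principal axis k, normalised by the spread
    dist2 = np.array([d[(k + 1) % 3] ** 2 + d[(k + 2) % 3] ** 2
                      for k in range(3)])
    f2 = np.array([evals[(k + 1) % 3] + evals[(k + 2) % 3] for k in range(3)])
    return np.sqrt((1.0 / n) * (1.0 + (dist2 / f2).mean() / 3.0))

def next_measurement(candidates, fiducials, targets):
    """Greedy guided step: find the worst-predicted target, then pick the
    candidate fiducial position that minimises the predicted TRE there."""
    worst = max(targets, key=lambda r: predict_tre(r, fiducials))
    best = min(candidates,
               key=lambda x: predict_tre(worst, np.vstack([fiducials, x])))
    return best, worst

# Toy example: start from the corners of a unit cube of fiducials.
cube = np.array([[sx, sy, sz] for sx in (-1.0, 1.0)
                 for sy in (-1.0, 1.0) for sz in (-1.0, 1.0)])
targets = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
candidates = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [4.0, 4.0, 4.0]])
best, worst = next_measurement(candidates, cube, targets)
```

Swapping `predict_tre` for the heteroscedastic point–line model of [12, 15] (with the covariances set to identity, as above) turns this greedy step into the guidance described in the text.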

Experimental validation

To simulate different heuristics for data acquisition, we acquired images in batch, measuring the ball-tip fiducial at locations throughout the viewing frustum of each camera. As a gold standard calibration is difficult to establish, we constructed a bronze standard registration computed using an iterative closest-point point-to-line registration algorithm [12] and all calibration measurements. The root-mean-squared (RMS) distance between registered 3D points and lines was 2.15 mm for the webcam and 0.82 mm for the endoscopic camera. The measurements transformed by the bronze standard registration into the canonical camera coordinate frame are shown in Figs. 4a–b.
Fig. 4

Measurements transformed by the bronze standard registration into the canonical camera coordinate frame

a–b Calibration fiducial measurements in canonical camera coordinates

c–d Canonical frustums partitioned into 27 zones in 3 ranges (near, middle, and far)

We performed three simulation experiments with the collected measurements. The first experiment simulated unguided calibration using measurements sampled approximately uniformly within the frustum. The canonical camera frustum was divided into 27 zones (Figs. 4c–d), and the zones were transformed into the camera DRF frame using the bronze standard registration. For each simulation trial, six measurements were randomly chosen from the near range of zones to initialise the calibration. An additional 19 measurements were added one at a time, each time computing a new registration. Measurements were added by selecting one random point from each of the 18 zones in the middle and far ranges (for a total of 24 measurements); a 25th independent measurement was chosen without duplication from a random zone in the middle and far ranges. The second experiment used guided calibration to select the measurements. Target points were placed every 50 mm throughout the canonical frustum of the webcam and every 10 mm for the endoscopic camera. For each simulation trial, six measurements were randomly chosen from the near range of zones to initialise the calibration. An additional 19 measurements were added one at a time, each time computing a new registration. Before adding measurement (n + 1), we used the registration computed using the first n measurements to transform those measurements into the canonical camera coordinate frame; TRE magnitude was predicted at all target locations in the canonical frustum using (3) with the transformed measurements and their corresponding lines of projection to find the target location having the greatest predicted TRE. We then transformed the remaining calibration measurements into the canonical camera frame using the current registration estimate.
From these measurements, we chose the one that minimised the predicted TRE [computed using (3)] at that worst-predicted target location. The third experiment simulated poorly performed unguided calibration. It was performed similarly to the first experiment, except that measurements were randomly chosen without duplication from the central-most zone of the middle range and from the central-most zone of the far range. A total of 1000 trials were performed for each simulation in the three experiments. TRE magnitude was computed at the centre of the frustum, and the 5th, 25th, 50th, 75th, and 95th percentiles were computed. TRE was also computed every 5 mm in the xz-plane of the canonical frustum for each value of n measurements in each trial, and RMS TRE over all of the trials was computed at each target location.
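The percentile and RMS summaries above can be computed as below; the TRE samples are synthetic stand-ins for the per-trial values, since the real distributions come from the simulations described in this section.

```python
import numpy as np

# Synthetic stand-in for per-trial TRE magnitudes (mm) at the frustum centre
# for one value of n; the real values come from the 1000 simulation trials.
rng = np.random.default_rng(1)
tre = rng.gamma(shape=4.0, scale=0.2, size=1000)

# 5th, 25th, 50th, 75th, and 95th percentiles, as reported in Fig. 6.
percentiles = np.percentile(tre, [5, 25, 50, 75, 95])

# RMS TRE over all trials, as mapped in the contour plots of Figs. 7 and 8.
rms = np.sqrt(np.mean(tre ** 2))
```

RMS is always at least the mean for non-negative samples, which is why RMS TRE maps read slightly higher than median TRE statistics at the same location.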

Results

The sampling schemes for each experiment yielded different spatial distributions of calibration measurements (Fig. 5). Using guided calibration tended to produce spatial distributions of measurements clustered near the left and right edges of the near and far planes of the frustum (Figs. 5c–d).
Fig. 5

Examples of calibration measurements produced in the three validation experiments

a–b Unguided uniform sampling

c–d Guided calibration

e–f Unguided uniform sampling from the central zones of the middle and far ranges

Fig. 6 shows statistics of the TRE magnitude computed at the centre of the canonical frustum. For the endoscopic camera, unguided sampling from the central zones produced the greatest TRE values and the widest variation in the 5th–95th percentiles of TRE magnitude. Uniform sampling produced a smaller median TRE at the frustum centre (0.45 mm at n = 25) compared with guided calibration (0.75 mm at n = 25). Guided calibration produced the smallest range of the 5th–95th percentiles of TRE magnitude.
Fig. 6

TRE magnitude at the canonical frustum centre as a function of the number of calibration measurements

a–b Unguided uniform sampling

c–d Guided calibration

e–f Unguided uniform sampling from the central zones of the middle and far ranges. Note the change in scale of the vertical axis in (f)

For the webcam, unguided sampling from the central zones produced the greatest TRE values and the widest variation in the 5th–95th percentiles of TRE magnitude. Guided calibration produced a smaller median TRE at the frustum centre (0.45 mm at n = 25) compared with uniform sampling (0.77 mm at n = 25). Guided calibration produced the smallest range of the 5th–95th percentiles of TRE magnitude. Figs. 7 and 8 show contour plots of RMS TRE computed in the plane y = 0 of the canonical frustum, where the difference in RMS TRE is 0.05 mm between each pair of adjacent contour lines.
Fig. 7

RMS TRE in the plane y = 0 of the canonical frustum of the endoscopic camera for n = 17 (left column) and n = 25 (right column) calibration measurements

a–b Unguided uniform sampling

c–d Guided calibration

e–f Unguided uniform sampling from the central zones of the middle and far ranges. The difference in RMS TRE is 0.05 mm between each pair of adjacent contour lines. White text indicates the minimum value of TRE in the plane

Fig. 8

RMS TRE in the plane y = 0 of the canonical frustum of the webcam for n = 17 (left column) and n = 25 (right column) calibration measurements

a–b Unguided uniform sampling

c–d Guided calibration

e–f Unguided uniform sampling from the central zones of the middle and far ranges. The difference in RMS TRE is 0.05 mm between each pair of adjacent contour lines. White text indicates the minimum value of TRE in the plane

For the endoscopic camera, guided calibration produced greater RMS TRE values compared with uniform sampling, but the variation in RMS TRE over the frustum plane was smaller (i.e. the gradient of RMS TRE was smaller for guided calibration). Unguided calibration using only the central region of the frustum produced the highest RMS TRE values. For the webcam, guided calibration produced the smallest RMS TRE values and the smallest gradient of RMS TRE values.

Discussion/conclusion

We present a novel paradigm of ‘guided hand–eye calibration’, where a TRE prediction model is used to guide fiducial placement between successive measurements and the predicted TRE of the calibration is assessed for each new measurement. In this manner, accurate calibration can be achieved using between 15 and 20 measurements, compared with the many tens or hundreds of images cited in the robotics literature [6]. The use of a monochromatic ball as a calibration fiducial allows for reliable automatic segmentation, and the TRE-based guidance has the potential to eliminate user variability in calibration data acquisition. The potential for reduced computation and user dependence, as well as real-time accuracy assessment, may be advantageous for clinical deployment. Our image processing pipeline is automatic, but we assume that certain conditions hold during image acquisition: that the entire calibration ball is present in the image, that there is high contrast between the ball and the background, and that there is no motion blur. Under these conditions, we expect the accuracy of circle centre localisation using the circle Hough transform to be ∼1 pixel [19]. Our method assumes that the intrinsic parameters of the camera (the camera matrix and lens distortion model) can be reliably determined via camera calibration, and that the distortion model adequately describes the distortion of the camera optics. Our heuristic for guided calibration was to add the next calibration measurement that would minimise the predicted TRE at the target location in the frustum with the current greatest predicted TRE, but any other heuristic could be used. For example, one might choose to minimise TRE at a specific target location or minimise the average TRE over a specific region of the frustum.
The simulation results suggested that the isocontours of TRE for point–line registration are approximately elliptic, with the minimum value occurring near the centroid of the measurements, which is consistent with the results for paired-point registration [14, 20]. Under TRE-guided acquisition (Figs. 5c–d), the fiducial configuration approximates the maximal extent of the viewing frustum, which can be explained by the elliptic distribution of TRE: to minimise the TRE for a target that moves between measurements, the guided fiducial location tends to coincide with the periphery of the viewing frustum, because the centroid of the measurements must be shifted closer to the moving target. If a different heuristic is used, the guided fiducial configuration will be different. Possible future research would be to explore the relationship between TRE-minimising heuristics and fiducial configuration, which may further improve calibration performance. Guided calibration produced calibrations that converged to a slightly different transformation than the bronze standard when using the endoscopic camera. This may have been caused by the large amount of barrel distortion in the endoscopic camera: any errors in the intrinsic calibration of the lens distortion model would produce errors in the segmented image location of the ball fiducial, and these errors would increase in magnitude near the edges of the images. Guided calibration tends to choose calibration measurements from the periphery of the frustum, which correspond to image locations near the image edge. When using the webcam, which had very little barrel distortion, guided calibration produced calibrations that converged near to the bronze standard. Unguided uniform sampling can produce usable calibrations (Figs. 6–8a–b), whereas sampling only from the central regions of the frustum produces poor calibrations (Figs. 6–8e–f).
Uniform sampling produced more consistent calibrations than guided calibration when fewer than 12 measurements were used (as shown by the high 95th percentile TRE values for guided calibration in Figs. 6c–d). This can be explained by the fact that guided calibration tends to add new measurements at the extreme limits of the frustum, which can cause abrupt changes in the estimated registration because of large changes in the spatial distribution of the measurements. Our implementation of uniform sampling methodically added new measurements working from the middle to far ranges, resulting in a gradual change in the spatial distribution of the measurements. Guided calibration in practice would be performed slightly differently than described in the Methods section. We collected data in batch in order to perform statistical analysis of the TRE behaviour and to provide a standardised dataset for both the unguided and guided calibration. Since we collected all of our calibration data in a batch fashion, we could not actually guide the collection of the next calibration measurement; instead, we had to select the next calibration measurement from those we had collected. In practice, we would sample the canonical frustum, choose the sample that would minimise some function of TRE, and apply the inverse of the current registration estimate to the samples to guide the collection of the next calibration measurement in the DRF frame of the camera. Note that we do not need to guide the 6D pose of the calibration fiducial; the choice of a ball fiducial means that only the position of the calibration fiducial must be guided. Our simulations included the effect of using the (possibly inaccurate) current registration estimate when choosing the next calibration measurement. Implementing an effective and accurate hand–eye calibration system for intraoperative use is a possible future direction.
From our previous work, we observed that TRE decreases monotonically, reaching a plateau after 12–15 measurements are acquired [16]. From this work, we observed that guided calibration tends to suggest fiducial locations close to the periphery of the viewing frustum. To achieve accurate intraoperative hand–eye calibration using minimal measurements, one possible measuring heuristic would be to collect 4–5 measurements at each of the near, middle, and far planes of the viewing frustum, while keeping the fiducial locations spread out and near the image edge. Using guided calibration, our results, which are based on real measurements, suggested that millimetre TRE can be achieved using between 15 and 20 measurements (Figs. 7–8c–d) for a target located ∼120 mm from the endoscopic camera. Similarly, sub-millimetre TRE can be achieved for the webcam using between 15 and 20 measurements. As the true FLE model is difficult to characterise, we employed isotropic FLE noise for our experiments even though the TRE estimation model [12] is capable of incorporating heteroscedastic FLE. As our guided calibration already achieved millimetre TRE in our experiments, the use of the more complicated heteroscedastic FLE model may not provide significant improvement for guided calibration.
References (7 in total)

1. Olsson C., Kahl F., Oskarsson M.: 'Branch-and-bound methods for Euclidean registration problems'. IEEE Trans Pattern Anal Mach Intell, 2009.

2. Hughes-Hallett A., Mayer E.K., Marcus H.J., Cundy T.P., Pratt P.J., Darzi A.W., Vale J.A.: 'Augmented reality partial nephrectomy: examining the current status and future perspectives'. Urology, 2013.

3. Fitzpatrick J.M., West J.B., Maurer C.R.: 'Predicting error in rigid-body point-based registration'. IEEE Trans Med Imaging, 1998.

4. Ma B., Moghari M.H., Ellis R.E., Abolmaesumi P.: 'Estimation of optimal fiducial target registration error in the presence of heteroscedastic noise'. IEEE Trans Med Imaging, 2010.

5. Morgan I., Jayarathne U., Rankin A., Peters T.M., Chen E.C.S.: 'Hand-eye calibration for surgical cameras: a Procrustean Perspective-n-Point solution'. Int J Comput Assist Radiol Surg, 2017.

6. Chen E.C.S., Peters T.M., Ma B.: 'Guided ultrasound calibration: where, how, and how many calibration fiducials'. Int J Comput Assist Radiol Surg, 2016.

7. Thompson S., Stoyanov D., Schneider C., Gurusamy K., Ourselin S., Davidson B., Hawkes D., Clarkson M.J.: 'Hand-eye calibration for rigid laparoscopes using an invariant point'. Int J Comput Assist Radiol Surg, 2016.