| Literature DB >> 32038862 |
Pedro Rodrigues, Michel Antunes, Carolina Raposo, Pedro Marques, Fernando Fonseca, Joao P. Barreto.
Abstract
Knee arthritis is a common joint disease that usually requires a total knee arthroplasty. Multiple surgical variables have a direct impact on the correct positioning of the implants, and finding an optimal combination of all these variables is the most challenging aspect of the procedure. Usually, preoperative planning using a computed tomography scan or magnetic resonance imaging helps the surgeon decide the most suitable resections to be made. This work is a proof of concept for a navigation system that supports the surgeon in following a preoperative plan. Existing solutions require costly sensors and special markers, fixed to the bones through additional incisions, which can interfere with the normal surgical flow. In contrast, the authors propose a computer-aided system that uses consumer RGB and depth cameras and does not require additional markers or tools to be tracked. They combine a deep learning approach for segmenting the bone surface with a recent registration algorithm for computing the pose of the navigation sensor with respect to the preoperative 3D model. Experimental validation using ex-vivo data shows that the method enables contactless pose estimation of the navigation sensor with respect to the preoperative model, providing valuable information for guiding the surgeon during the medical procedure.
Keywords: RGB cameras; bone; bone surface; computed tomography scan; computer-aided system; computer-aided total knee arthroplasty; deep learning approach; deep segmentation; depth cameras; diseases; geometric pose estimation; image registration; image segmentation; joint disease; knee arthritis; learning (artificial intelligence); magnetic resonance imaging; medical image processing; navigation sensor; navigation system; neural nets; orthopaedics; pose estimation; preoperative 3D model; prosthetics; surgery; surgical flow
Year: 2019 PMID: 32038862 PMCID: PMC6952257 DOI: 10.1049/htl.2019.0078
Source DB: PubMed Journal: Healthc Technol Lett ISSN: 2053-3713
Fig. 1. Diagram of the proposed method. Bone segmentation is performed on frames from the RGB camera. The segmentation is used to extract the region of interest in the point cloud from the depth sensor. The point cloud is then registered to the anatomical model to establish the relative pose.
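The final stage of the pipeline in Fig. 1 is a rigid registration between the cropped point cloud and the anatomical model. As a minimal sketch of that step, the snippet below crops a point cloud with a segmentation mask and aligns corresponding points with an SVD-based rigid fit (Kabsch). This is an illustrative stand-in, not the paper's actual segmentation network or registration algorithm; the function names and the assumption of known correspondences are ours.

```python
import numpy as np

def crop_point_cloud(points, mask, pixel_idx):
    """Keep only the 3D points whose (u, v) pixel falls inside the bone mask.

    points:    (N, 3) depth-sensor points
    mask:      (H, W) boolean segmentation mask from the RGB frame
    pixel_idx: (N, 2) integer (u, v) pixel coordinates of each point
    """
    return points[mask[pixel_idx[:, 1], pixel_idx[:, 0]]]

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch),
    assuming point-to-point correspondences are already known."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice the correspondences are unknown, so the alignment above would sit inside an iterative scheme (e.g. ICP-style nearest-neighbour matching) against the preoperative 3D model.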
Fig. 2. Femur segmentation results on the validation dataset (left column) and on the dataset without markers (right column), for which no ground truth is available. Green: prediction; blue: ground truth; cyan: both.
Fig. 3. Segmentation metrics for femur segmentation on the validation dataset.
a Metric distribution
b Frame with worst IoU metric
Fig. 4. Registration results on the validation dataset.
a Comparison between the x, y, and z components of the trajectories of the proposed registration (colours) and marker-based tracking (black)
b Distribution of the rotation error (eR) in degrees and the translation error (eT) in millimetres
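The rotation error (eR) and translation error (eT) reported in Fig. 4b can be computed from an estimated pose and a marker-based ground-truth pose. A common definition (an assumption here; the paper may define these errors differently) is the angle of the relative rotation and the Euclidean distance between translations:

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angle of the relative rotation R_est^T R_gt, in degrees."""
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def translation_error_mm(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth translations
    (millimetres, assuming both poses are expressed in mm)."""
    return np.linalg.norm(t_est - t_gt)
```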
Fig. 5. Each row corresponds to a different frame, showing markerless and contactless femur registration for AR-guided surgery using preoperatively planned cuts.
Fig. 6. Femur segmentation results using video sequences of only one femur for training: good results (left column) and bad results (right column). Green: prediction; blue: ground truth; cyan: both.