Katsuma Inoue, Yasuo Kuniyoshi, Katsushi Kagaya, Kohei Nakajima.
Abstract
Soft continuum bodies have demonstrated their effectiveness in generating flexible and adaptive functionalities by capitalizing on the rich deformability of soft materials. Compared with a rigid-body robot, the morphological dynamics of a soft continuum body are in general difficult to model and emulate. In addition, a soft continuum body potentially has infinitely many degrees of freedom, requiring considerable labor to manually annotate its dynamics from external sensory data such as video. In this study, we propose a novel noninvasive framework for automatically extracting the skeletal dynamics of a soft continuum body from video and show the applications and effectiveness of our framework. First, we demonstrate that our framework can extract skeletal dynamics from animal videos, which can be effectively utilized for the analysis of soft continuum bodies, including animal motion. Next, we focus on a soft continuum arm, a commonly used platform in soft robotics, and evaluate its potential information-processing capability. Normally, to control such a high-dimensional system, many sensors must be introduced to completely capture the motion dynamics, which degrades the material's softness. We illustrate that evaluating the memory capacity and the sensory reconstruction error enables us to determine the minimum number of sensors sufficient to fully capture the state dynamics, which is highly useful in designing the sensor arrangement of a soft robot. We also release the software developed in this study as open source for the biology and soft robotics communities, contributing to the automation of the annotation process required for the motion analysis of soft continuum bodies.
Keywords: computer vision; continuum robot; morphological computation
Year: 2021 PMID: 33601962 PMCID: PMC9057898 DOI: 10.1089/soro.2020.0110
Source DB: PubMed Journal: Soft Robot ISSN: 2169-5172 Impact factor: 7.784
Comparison with Other Markerless Methods That Can Skeletonize Soft Continuum Bodies (the number of frames)
| Method | Algorithm | Pretraining | Resolution | Manual specification of tip points |
|---|---|---|---|---|
| Octopus arm tracking system | Thinning algorithm | Not required | Adjustable | |
| DeepLabCut | ResNet | Required | Fixed | Not required after pretraining |
| Soft Skeleton Solver (ours) | Fast marching method | Not required | Adjustable | Not required |
FIG. 1. Detailed description of the algorithm. The algorithm has three steps, each processing data from left to right (i–iii). (A) Basal point estimation. In this demonstration using the brittle star video, the point farthest from the edge was selected as the basal point. (B) Tip point estimation. Five tip points were estimated, corresponding to the five arms in this demonstration. (C) Skeletal curve estimation. The skeletal curve was estimated to connect the basal point and the tip points. The solution was obtained by backtracking along the gradient of the traveling time field. Color images are available online.
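The three caption steps can be sketched in code. The following is a minimal stand-in, assuming a binary body mask as input: it replaces the fast marching method's traveling-time field with a 4-connected BFS geodesic distance, finds a single tip rather than one per arm, and backtracks along decreasing distance. All function names are illustrative and not the released software's API.

```python
from collections import deque

import numpy as np

NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def bfs_distance(mask, seeds):
    """In-mask geodesic distance (4-connected BFS) from a set of seed pixels."""
    dist = np.full(mask.shape, np.inf)
    queue = deque()
    for s in seeds:
        dist[s] = 0.0
        queue.append(s)
    while queue:
        y, x = queue.popleft()
        for dy, dx in NEIGHBORS:
            ny, nx = y + dy, x + dx
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and np.isinf(dist[ny, nx])):
                dist[ny, nx] = dist[y, x] + 1.0
                queue.append((ny, nx))
    return dist

def extract_skeleton(mask):
    # (i) Basal point: the in-mask pixel farthest from the mask boundary.
    edge = [(y, x) for y in range(mask.shape[0]) for x in range(mask.shape[1])
            if mask[y, x] and any(
                not (0 <= y + dy < mask.shape[0] and 0 <= x + dx < mask.shape[1])
                or not mask[y + dy, x + dx] for dy, dx in NEIGHBORS)]
    d_edge = bfs_distance(mask, edge)
    base = np.unravel_index(
        np.argmax(np.where(np.isinf(d_edge), -1.0, d_edge)), mask.shape)

    # Traveling-time surrogate: geodesic distance from the basal point.
    d_base = bfs_distance(mask, [base])

    # (ii) Tip point: the farthest reachable pixel from the base (the paper
    # estimates one tip per arm; a single tip keeps this sketch short).
    tip = np.unravel_index(
        np.argmax(np.where(np.isinf(d_base), -1.0, d_base)), mask.shape)

    # (iii) Skeletal curve: backtrack from tip to base along decreasing
    # distance, the discrete analogue of descending the travel-time gradient.
    curve = [tip]
    while tuple(curve[-1]) != tuple(base):
        y, x = curve[-1]
        nbrs = [(y + dy, x + dx) for dy, dx in NEIGHBORS
                if 0 <= y + dy < mask.shape[0] and 0 <= x + dx < mask.shape[1]]
        curve.append(min(nbrs, key=lambda p: d_base[p]))
    return base, tip, curve
```

The BFS distance is a coarse approximation; the fast marching method solves the same front-propagation problem with sub-pixel accuracy, which is what makes the resolution of the extracted curve adjustable.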
FIG. 2. Demonstration of skeletal dynamics extraction. (A) Extracted spine dynamics of the dead trout. We used a video published in the study of Beal et al.[41] and the manually annotated basal points. The XY-coordinate dynamics of each point on the spine are plotted (the colormap shows the 1000-dimensional dynamics of the XY-coordinates from the head to the tail). The skeletal curve and tip point were automatically extracted by our framework (Supplementary Video S3). (B) Skeletal dynamics of the five-armed brittle star and its movement analysis. We used a brittle star video published in the study of Wakita et al.[42] The skeletal curve and endpoints were automatically extracted by our framework (Supplementary Video S4). Each arm is indexed in clockwise order from the upper one. The left colormap plots the dynamics of the relative coordinates. The right colormap shows the correlation matrix of the velocities. Color images are available online.
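The inter-arm coordination analysis in (B) reduces to a standard computation once per-arm trajectories are available. A minimal sketch, assuming an array of XY tip trajectories (one per arm) and using scalar speed rather than the paper's exact velocity quantity; the function name is illustrative:

```python
import numpy as np

def velocity_correlation(trajectories):
    """Correlation matrix of per-arm speed time series.

    trajectories: array of shape (n_arms, T, 2) holding XY coordinates of
    one tracked point per arm over T frames. Returns an (n_arms, n_arms)
    matrix, a simplified analogue of the velocity correlation in Fig. 2B.
    """
    vel = np.diff(trajectories, axis=1)   # frame-to-frame displacement vectors
    speed = np.linalg.norm(vel, axis=2)   # (n_arms, T - 1) scalar speeds
    return np.corrcoef(speed)
```

Arms moving in synchrony produce entries near +1, while alternating arms produce entries near -1, which is what makes the matrix useful for gait analysis.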
FIG. 3. Soft octopus arm setup. (A) Overview of the soft octopus arm.[45] (B) Experimental setup for evaluating the information-processing capability. (C) Extracted skeletal dynamics and the corresponding contact vector. The color represents the angle of the tangent vector. (D) Response curve comparison. Top: dynamics of the 10 bend sensors; each label denotes the index of the sensor. Middle: input time series. Bottom: dynamics of the angle of the tangent vector. The y-axis in the colormap corresponds to the position on the soft octopus arm (i.e., #1: the top basal point; #10,000: the bottom tip point). Refer also to Supplementary Video S6. Color images are available online.
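The tangent-angle representation used in (C) and (D) can be computed directly from an extracted skeletal curve. A minimal sketch, assuming the curve is an (N, 2) array of XY points ordered from base to tip; the function name is illustrative:

```python
import numpy as np

def tangent_angles(curve):
    """Angle (radians) of the tangent vector along a skeletal curve.

    The tangent at each point is approximated by the finite difference
    between consecutive curve points, so N points yield N - 1 angles.
    """
    diffs = np.diff(curve, axis=0)        # segment vectors along the curve
    return np.arctan2(diffs[:, 1], diffs[:, 0])
```

Stacking these angle profiles over frames yields a (time x position) array, which is exactly the kind of colormap shown in panel (D).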
FIG. 4. (A) Results of the short-term memory task. Left: performance function. The mutual information values averaged over 10 trials are plotted; the values obtained with the 10 bend sensors are also plotted (black dotted line). The number in each label denotes the dimension of the selected dynamics. Right: performance capacity. The dotted line shows the capacity with the actual sensory data. The error bar shows the standard deviation over 10 trials. (B) Reconstruction of sensor time series. Left: reconstruction of the 10 bend sensor dynamics. The figure displays both the actual sensor values (dotted) and the values predicted by the linear model (red). Right: reconstruction error. The sum of the NMSEs over the 10 sensor dynamics is plotted. The error bar represents the standard deviation over 10 trials. NMSE, normalized mean square error. Color images are available online.
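The short-term memory task in (A) follows the standard reservoir-computing recipe: fit a linear readout from the state dynamics to the input delayed by k steps, score each delay, and sum the scores. A minimal sketch; note that the paper scores with mutual information, whereas this sketch uses the common squared-correlation variant of memory capacity, and all names are illustrative:

```python
import numpy as np

def memory_capacity(states, inputs, max_delay, washout=50):
    """Short-term memory capacity of state dynamics driven by a scalar input.

    states: (T, D) array of observed state dynamics (e.g., sensor readings).
    inputs: (T,) array of the scalar input sequence.
    For each delay k, a linear readout is fitted by least squares to
    reproduce u(t - k); the score is the squared correlation between the
    target and the prediction, and the capacity is the sum over delays.
    """
    T, _ = states.shape
    X = np.hstack([states, np.ones((T, 1))])  # add a bias column
    capacity = 0.0
    for k in range(1, max_delay + 1):
        y = inputs[washout - k:T - k]         # delayed-input target
        Xk = X[washout:]                      # states after the washout period
        w, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        pred = Xk @ w
        r = np.corrcoef(pred, y)[0, 1]
        capacity += r * r
    return capacity
```

The same least-squares readout can be reused for the Fig. 4B-style sensor reconstruction by swapping the delayed input for a held-out sensor channel as the target and reporting the NMSE instead of the correlation.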