
Skeletonizing the Dynamics of Soft Continuum Body from Video.

Katsuma Inoue, Yasuo Kuniyoshi, Katsushi Kagaya, Kohei Nakajima.

Abstract

Soft continuum bodies have demonstrated their effectiveness in generating flexible and adaptive functionalities by capitalizing on the rich deformability of soft materials. Compared with a rigid-body robot, the morphological dynamics of a soft continuum body are generally difficult to model and emulate. In addition, a soft continuum body potentially has an infinite number of degrees of freedom, requiring considerable labor to manually annotate its dynamics from external sensory data such as video. In this study, we propose a novel noninvasive framework for automatically extracting skeletal dynamics from video of a soft continuum body and show the applications and effectiveness of our framework. First, we demonstrate that our framework can extract skeletal dynamics from animal videos, which can be effectively utilized for the analysis of soft continuum bodies, including animal motion. Next, we focus on a soft continuum arm, a commonly used platform in soft robotics, and evaluate its potential information-processing capability. Normally, controlling such a high-dimensional system requires introducing many sensors to completely capture the motion dynamics, which deteriorates the material's softness. We illustrate that evaluating the memory capacity and the sensory reconstruction error enables us to verify the minimum number of sensors sufficient for fully grasping the state dynamics, which is highly useful in designing a sensor arrangement for a soft robot. We also release the software developed in this study as open source for the biology and soft robotics communities, which contributes to automating the annotation process required for the motion analysis of soft continuum bodies.


Keywords:  computer vision; continuum robot; morphological computation


Year:  2021        PMID: 33601962      PMCID: PMC9057898          DOI: 10.1089/soro.2020.0110

Source DB:  PubMed          Journal:  Soft Robot        ISSN: 2169-5172            Impact factor:   7.784


Introduction

Living organisms incorporate elastic body tissues to realize smooth and adaptive behavior in uncertain environments. Motivated by the ubiquity of soft structures in creatures, soft robots have been developed that exploit the deformability of soft materials.[1,2] In addition, the diverse spatiotemporal patterns of soft continuum bodies have recently been highlighted as a novel tool for implementing adaptive behavioral controllers,[3-6] sensors,[7-12] and information-processing devices.[13-15] In sum, the dynamic properties of soft materials are expected to be exploited to realize versatile functionality in next-generation robots. It is, however, challenging to quantitatively capture the skeletal dynamics of a soft continuum body in biology and soft robotics. Unlike a conventional rigid-body robot, soft continuum bodies are continuous, and modeling their dynamics potentially requires an infinite-dimensional state space. Owing to the intrinsic nonlinearity and hysteresis of soft materials, soft continuum bodies generate a rich variety of dynamic deformation patterns when actuated, making it difficult to construct a precise model describing the deformation dynamics.[16,17] The morphological displacement can, in principle, be measured by embedded sensors; however, implanted sensors often impair a material's softness, limiting the number that can be used. Therefore, to completely grasp the deformation dynamics of a soft continuum body, it is desirable to extract the skeletal dynamics with noninvasive external sensors such as video cameras or laser rangefinders.

In the field of computer vision and imaging science, skeletonization has long been an important topic for finding compact representations of objects in an image.[18] Blum's pioneering work[19] first formulated the concept of object skeletons and established the foundation of skeletonization.
Blum's skeleton is obtained by the grassfire transform and is analytically defined as the set of collision points of two independent curves propagating from the object boundary at a constant velocity.[20] Based on the grassfire transform, many approaches have been developed, including geometric approaches approximating the skeleton with the Voronoi diagram[21-23] and continuous curve propagation approaches emulating grassfire propagation with partial differential equations.[24-27] The skeletonization technique has been widely employed in image processing and computer vision applications. In particular, medical imaging widely uses skeletonization to extract the centerlines of blood vessels and arteries from computed tomography images.[28,29] Many frameworks have also been proposed to extract skeletal dynamics from video recordings of the motion of a soft continuum body. To analyze the complicated behavior of an octopus, a framework for extracting a three-dimensional (3D) arm trajectory was developed using multiple video cameras.[30,31] Its skeletonization is easily accomplished by simulating Blum's grassfire process on digital grids. By parameterizing the contour with elliptic Fourier descriptors, it is also possible to describe the morphological dynamics of soft continuum bodies.[37] In addition, deep neural network (DNN) models that track characteristic points on video have recently been proposed, which are powerful options for skeletonizing soft continuum bodies.[33,38] Although these computer vision approaches are useful for skeletonizing soft continuum body dynamics, they have several drawbacks.
For example, the endpoint coordinates of the skeleton must be manually specified for all video frames in the octopus arm tracking system of Refs.[30,31] The model-free method based on elliptic Fourier descriptors[37] is not suitable for extracting skeletal dynamics because it does not provide direct information on the skeletal coordinates. The DNN-based methods require preparing annotation data and fine-tuning the model (Table 1).[33] In other approaches, markers were directly attached to or drawn on the soft continuum body as reference points,[8,14,39,40] an invasive process that cannot be used in many cases, especially with animals.
Table 1.

Comparison with Other Markerless Methods That Can Skeletonize Soft Continuum Bodies (N: the Number of Frames)

Method | Algorithm | Pretraining | Resolution | Manual specification of tip points
Octopus arm tracking system[30,31] | Thinning algorithm[32] | Not required | Adjustable | O(N) (required for every frame)
DeepLabCut[33] | ResNet[34] | Required | Fixed | Not required after the pretraining
Soft Skeleton Solver (ours) | Fast marching method[27,35,36] | Not required | Adjustable | O(1) (only the first frame)
In this study, we propose a novel framework called SSS (Soft Skeleton Solver) for skeletonizing soft continuum body dynamics based on a background subtraction algorithm and a skeletonization algorithm[36] using a fast marching method (FMM).[35] By employing the minimum distance field and the traveling time field calculated during skeletonization, our framework can effectively and automatically extract the endpoint coordinates and skeleton curve of the soft continuum body on all frames except the first one. Furthermore, by specifying the resolution and tracking parameters, it is possible to extract the skeleton curve with arbitrary accuracy. Below, we list the contributions of this article:

- Our proposed method automates the annotation process of specifying the skeleton's tip points, which significantly enhances extraction efficiency and reduces manual operation costs.
- Unlike skeletonization methods based on DNNs, our proposed method does not require pretraining, which alleviates annotation and training costs.
- We demonstrate that our method can fully capture the deformation dynamics of soft bodies in a noninvasive manner, which could be effectively employed for designing optimal sensor placement.

In this article, we first demonstrate that our framework can extract skeletal dynamics from dead fish "swimming" and brittle star movement videos. We also show that both the microscopic and macroscopic features of the animal motion are effectively reflected in the analysis. In addition, we verify from video the minimum number of sensors sufficient for fully grasping the state dynamics of a soft silicone rubber arm, a typical platform in soft robotics. Normally, to completely capture the deformation dynamics, a sufficient number of sensors should be embedded in the body. However, implanting sensors in soft components often reduces their deformability and motion variety. We show that our framework offers a noninvasive indicator for designing the sensor arrangement on a soft robot through two demonstrations measuring the information-processing capacity and the reconstruction error of the actual sensor dynamics. Finally, we discuss the usefulness and future extensions of our framework. The software used in this study is open source and released on a website, which should be especially helpful for biologists and soft roboticists who wish to analyze the dynamic movement of soft continuum bodies.

Proposed Method

In this study, we propose an iterative skeletonizing framework for soft continuum bodies composed of the following three steps (Fig. 1A–C). The skeletonization process is automatically completed for all frames except the first one by extending the centerline estimation algorithm[36] based on the FMM algorithm. We explain the detailed algorithm of each step through a demonstration with a five-armed brittle star video (Fig. 1), in which five skeletal curves are extracted.
FIG. 1.

Detailed description of the algorithm. The algorithm has three steps; each step processes data from left to right (i–iii). (A) Basal point estimation. In this demonstration using the brittle star video, the farthest point from the edge was selected as the basal point. (B) Tip point estimation. Five tip points were estimated, corresponding to the five arms in this demonstration. (C) Skeletal curve estimation. The skeletal curve was estimated so as to connect the basal point and the tip points. The solution was obtained by backtracking along the gradient of the traveling time field. Color images are available online.


Basal point estimation

First, one of the two endpoints of the skeletal curve is estimated (referred to as the basal point). Initially, the region of the target object is extracted from the raw image I_t (Fig. 1A-i, ii). Here, we used a simple background subtraction algorithm to binarize the image I_t based on the pixel values with an appropriate threshold. Note that the extracted region is assumed to be simply connected, that is, enclosed by a single closed curve and having no holes. Next, the minimum Euclidean distance field D_t is calculated from the contour of the extracted region [x is a grid coordinate; D_t(x) represents the distance between x and the nearest boundary of the region]. By applying the FMM algorithm, the computational complexity of this calculation can be suppressed to O(HW log HW) for grid number HW (H and W are the height and width of the video, respectively). Based on the distance field D_t, the coordinate of the basal point s_t is estimated. Since the points on the skeletal curve are distributed on the ridge lines of the distance field, the basal point can be estimated from a local maximum point of D_t. In this brittle star demonstration in particular, five ridge lines corresponding to the five arms intersect at the maximum point of the distance field D_t. Therefore, we selected the local maximum point of D_t in the ε-neighborhood of the previous basal point s_{t-1} as the next basal point s_t (ε is set to an appropriate value according to the video size). Note that this estimation algorithm can be flexibly modified or replaced with another one depending on the target object's morphology. For example, a similar algorithm was used by fixing the Y-coordinate of the basal point for the soft octopus arm video presented in the Designing Sensory Configuration for Soft Robotic Arm section (see the Appendix for details). The basal point on the first frame I_1 is set manually; we developed a user interface to set the endpoint coordinates for the first frame (Supplementary Videos S1 and S2).
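As an illustrative sketch of this step, the distance field and the neighborhood-restricted local-maximum search can be written as follows, with SciPy's exact Euclidean distance transform standing in for the FMM-based computation (all function and variable names are hypothetical, not those of the released software):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def estimate_basal_point(mask, prev_basal, eps):
    """Local maximum of the Euclidean distance field D_t within the
    eps-neighborhood of the previous basal point.  `mask` is the binarized
    object region (True inside)."""
    dist = distance_transform_edt(mask)          # D_t(x): distance to boundary
    y0, x0 = prev_basal
    h, w = mask.shape
    ys = slice(max(0, y0 - eps), min(h, y0 + eps + 1))
    xs = slice(max(0, x0 - eps), min(w, x0 + eps + 1))
    window = dist[ys, xs]                        # restrict the search to the eps-window
    iy, ix = np.unravel_index(np.argmax(window), window.shape)
    return (ys.start + iy, xs.start + ix)

# Toy frame: a solid rectangle; the basal point lands on its medial ridge
mask = np.zeros((21, 21), dtype=bool)
mask[5:16, 3:18] = True
basal = estimate_basal_point(mask, prev_basal=(10, 10), eps=5)
```

On each subsequent frame the returned point would be fed back as `prev_basal`, which is what makes the tracking automatic after the first, manually annotated frame.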

Tip point estimation

Next, the other endpoint of the skeletal curve is estimated (referred to as the tip point). First, a speed field F_t is calculated from the distance field D_t (Fig. 1B-i); the speed is constructed to grow with the distance from the boundary, with a constant adjusting the convexity of F_t (set to a moderate value for calculation stability). Next, consider a closed curve propagating normal to itself with speed F_t from the wave source s_t. Then, the traveling time field T_t is calculated; T_t(x) denotes the time when the curve passes over x. In the special case where the wave front moves in one direction with velocity F, the relationship between T_t and F_t can be formulated as

|∇T_t(x)| F_t(x) = 1.

This is called the eikonal equation, whose solution can be efficiently acquired by the FMM algorithm, as with the calculation of the distance field.[36] After the calculation of T_t, the tip points d_t^(n) are estimated (Fig. 1B-ii). Here, n is an arm index (n = 1, ..., 5 in this demonstration). The traveling time field takes local maximum values at the farthest points from the wave source s_t. Therefore, we selected the local maximum point of T_t in the ε-neighborhood of the previous tip point d_{t-1}^(n) as the new tip point d_t^(n). The tip points on the first frame I_1 are manually set, and those on the rest of the frames are automatically obtained.
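A minimal grid version of the traveling-time computation can be sketched with a Dijkstra-style front propagation, which approximates the fast marching solution of the eikonal equation (the real FMM solves it with subgrid accuracy; this stand-in assumes a 4-connected grid and unit cell spacing):

```python
import heapq
import numpy as np

def travel_time(mask, speed, source):
    """First-arrival time T(x) of a front leaving `source` and moving with
    local speed `speed[x]` inside `mask` (Dijkstra approximation of the
    fast marching solution of |grad T| = 1/F; illustrative only)."""
    h, w = mask.shape
    T = np.full((h, w), np.inf)
    T[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (y, x) = heapq.heappop(heap)
        if t > T[y, x]:
            continue                       # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                nt = t + 1.0 / speed[ny, nx]   # crossing one cell takes 1/F
                if nt < T[ny, nx]:
                    T[ny, nx] = nt
                    heapq.heappush(heap, (nt, (ny, nx)))
    return T

# Toy corridor: the farthest reachable point has the largest travel time,
# so it is the natural tip-point candidate
mask = np.zeros((5, 12), dtype=bool)
mask[2, 1:11] = True
speed = np.ones((5, 12))
T = travel_time(mask, speed, source=(2, 1))
T_masked = np.where(np.isfinite(T), T, -1.0)
tip = np.unravel_index(np.argmax(T_masked), T.shape)
```

Selecting the local maximum of `T` near the previous tip, as in the basal-point step, would restrict this global argmax to the tracked arm.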

Skeletal curve extraction

Finally, a point sequence distributed at regular intervals on the skeletal curve is extracted by connecting the basal point and each tip point (below, the arm index n is omitted for simplicity). We consider the skeletal curve C minimizing the accumulated value of the cost function 1/F_t among the curves connecting s_t and d_t. The minimum-cost path between s_t and d_t is found by backtracking along the gradient of T_t from d_t until reaching s_t (Fig. 1C-i, ii).[36] The second-order Runge-Kutta method (RK2) was used for approximating the gradient with a constant step width δ in Algorithm 1. This algorithm yields a point sequence P on the skeletal curve C. Note that the final point of P does not necessarily match s_t exactly. After the backtracking process, a smoothed point sequence Q is obtained from P with a smoothing algorithm whose strength is adjusted by a constant K. Then, Q is interpolated to construct a smoothed curve (cubic interpolation was used). Finally, the point sequence R on the smoothed curve is reconstructed so that its points are equally spaced along the curve. The skeletal resolution can be arbitrarily set by adjusting the step width δ in the gradient-descent process and the number of points in R. Through the iterative algorithm repeating the above three steps, our framework can automatically extract the skeletal dynamics of soft continuum bodies.
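The smoothing and equal-spacing resampling steps might look as follows (a plain moving average and linear arc-length interpolation are assumed here; the paper uses cubic interpolation and does not specify the exact smoothing kernel):

```python
import numpy as np

def smooth(points, K):
    """Moving-average smoothing with window 2K+1 (one plausible reading of
    the smoothing constant K; not the paper's exact kernel)."""
    P = np.asarray(points, float)
    out = np.empty_like(P)
    n = len(P)
    for i in range(n):
        lo, hi = max(0, i - K), min(n, i + K + 1)
        out[i] = P[lo:hi].mean(axis=0)           # average over the window
    return out

def resample(points, n_points):
    """Redistribute points at equal arc-length intervals along the polyline."""
    P = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(P, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_points)
    out = np.empty((n_points, P.shape[1]))
    for d in range(P.shape[1]):                  # interpolate each coordinate
        out[:, d] = np.interp(targets, s, P[:, d])
    return out

# Unevenly spaced points on a straight line become evenly spaced
raw = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [3.0, 0.0]])
R = resample(smooth(raw, K=0), n_points=4)
```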

Case Studies

In this section, we demonstrate the applications of our framework to biological and soft robotic data.

Analysis of biological data

First, we demonstrate the effectiveness of our framework by skeletonizing animal movement. We prepared a dead trout swimming video (published in Refs.[41,43]; 178 frames) as a simple task and extracted the spine dynamics. This video displays the ability of a dead fish body to swim upstream by exploiting the Karman vortices generated by a D-shaped obstacle. In this demonstration, we used the manually annotated time series of the head position as the basal point dynamics, and the skeletal dynamics and tail position were automatically extracted with our framework. Figure 2A shows the extracted spine dynamics. As can be seen from the figure, our framework effectively extracted the continuum spine dynamics (Supplementary Video S3).
FIG. 2.

Demonstration of skeletal dynamics extraction. (A) Extracted spine dynamics of the dead trout. We used a video published in the study of Beal et al.[41] and the manually annotated basal points. The XY-coordinate dynamics of each point on the spine are plotted (the colormap shows the 1000-dimensional dynamics of XY-coordinates from the head to the tail). The skeletal curve and tip point were automatically extracted by our framework (Supplementary Video S3). (B) Skeletal dynamics of a five-armed brittle star and its movement analysis. We used a brittle star video published in the study of Wakita et al.[42] The skeletal curve and endpoints were automatically extracted by our framework (Supplementary Video S4). Each arm is indexed in clockwise order from the upper one. The left colormap plots the dynamics of the relative coordinates. The right colormap shows the correlation matrix of velocities. Color images are available online.

Next, we exhibit that our framework can be applied to skeletonizing a multi-armed object. We prepared a five-armed brittle star (Ophiactis brachyaspis) video (published in Ref.[42]; 843 frames). It has been reported that a brittle star randomly selects a leading arm opposite to the stimulated one and tends to move forward in the direction of the leading arm while synchronizing the bilateral arms adjacent to the leading arm.[42] In particular, a five-armed brittle star has two candidate arms that can be selected in response to an external stimulus. In the video, the tester applied a stimulus to the tip of arm #5 (purple). The brittle star then selected arm #2 (orange) as the leading arm until around the 500th frame, and arm #3 (green) after that. Figure 2B shows the relative coordinate dynamics of each arm and the movement analysis. Note that the skeletal dynamics were automatically extracted for all frames except the first one.
Positive correlation values were globally obtained in two blocks of the correlation matrix: one between arm #1 and arm #3, and another between arm #2 and arm #4 (surrounded by dotted lines in Fig. 2B), which is consistent with the observed leading-arm selection and synchronized arm movement (Supplementary Video S4). In this way, our framework efficiently extracts the skeletal dynamics of soft continuum bodies and provides useful information for understanding the comprehensive movement of animals. Our proposed method can also be used to skeletonize fuzzy soft-bodied objects. We prepared a video recording the behavior of Hydra vulgaris,[44] whose body is semitransparent and thus generally hard to skeletonize. Our proposed method is, however, applicable to skeletonizing such a blurry body when the background is homogeneous, since a simple binarization process can extract the semitransparent object (Supplementary Video S5). In this way, our proposed method can extract the object structure robustly in cases where the background environment can be easily controlled (e.g., in a laboratory).
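The velocity correlation analysis used for the brittle star can be sketched as follows (a toy with three synthetic "arms"; the array shapes and names are illustrative, not those of the paper's analysis code):

```python
import numpy as np

def velocity_correlation(arms):
    """Correlation matrix of per-arm velocity dynamics.  `arms` has shape
    (n_arms, T, 2): one tip-coordinate trajectory per arm."""
    v = np.diff(arms, axis=1)            # frame-to-frame velocity vectors
    flat = v.reshape(v.shape[0], -1)     # stack x/y components per arm
    return np.corrcoef(flat)

# Two arms oscillating in phase and one in antiphase
t = np.linspace(0, 4 * np.pi, 200)
arm1 = np.stack([np.zeros_like(t), np.sin(t)], axis=1)
arm2 = np.stack([np.zeros_like(t), np.sin(t)], axis=1)
arm3 = np.stack([np.zeros_like(t), -np.sin(t)], axis=1)
C = velocity_correlation(np.stack([arm1, arm2, arm3]))
```

In-phase arm pairs show entries near +1 and antiphase pairs near -1, which is the signature used in the text to identify the synchronized bilateral arms.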

Designing sensory configuration for soft robotic arm

Next, we demonstrate that our framework offers a noninvasive indicator for designing the sensory configuration of a soft robot. Here, we prepared a video recording of soft octopus arm movement (published in Refs.[15,46]; 36,321 frames). The soft octopus arm is a typical soft continuum robotic body in which a servomotor and 10 bend sensors are attached to a silicone rubber arm (Fig. 3A). A soft continuum arm is a commonly used platform in the field of soft robotics.[13,14,47] The soft octopus arm is also a primary mechanical device for physical reservoir computing,[5,6,48-53] where the complicated time series responses generated in the soft material are exploited for machine-learning tasks. In particular, the soft octopus arm converts binary motor commands with a fixed switching time interval into continuous sensory dynamics (Fig. 3B). Originally, 10 bend sensors were embedded to extract complex spatiotemporal deformation patterns, but these were insufficient for fully grasping the deformation dynamics; moreover, owing to the reduced flexibility, the number of attachable sensors was limited. We estimated the number of sensors sufficient to capture the deformation dynamics by evaluating the potential information-processing capacity of the soft octopus arm.
FIG. 3.

Soft octopus arm setup. (A) Overview of the soft octopus arm.[45] (B) Experimental setup for the evaluation of the information-processing capability. (C) Extracted skeletal dynamics and the corresponding tangent vectors. The color represents the angle of the tangent vector. (D) Response curve comparison. Top: 10 bend-sensor dynamics; each label represents the index of the sensor. Middle: input time series. Bottom: dynamics of the angle of the tangent vector. The y-axis in the colormap corresponds to the position on the soft octopus arm (i.e., #1: the top basal point, #10,000: the bottom tip point). Refer also to Supplementary Video S6. Color images are available online.

We prepared 10,010 points of extracted skeletal dynamics R. Then, to correspond to the actual sensors, tangent vectors were calculated from R by taking differences between nearby points on the skeleton (yielding 10,000 tangent vectors). Figure 3C and D displays the skeletal dynamics and the sensory dynamics in response to the binary input sequence (Supplementary Video S6). Here, we assumed that these tangent vectors corresponded to three-axis accelerometers measuring the direction of gravity. A three-axis accelerometer is often embedded into soft robotic components to measure the displacement of the material deformation.[54-56] In other words, we estimated the number of three-axis accelerometers required to fully exploit the potential computational resources. To evaluate the information-processing capability, we prepared a short-term memory task that measures the memory property for a random input signal. The short-term memory task requires the system to reconstruct the past input n segments before from the current state with a linear logistic regression model.[57] Below, the dynamics were subsampled in time; that is, only the dynamics on every few steps were considered.
The linear weight of the model was trained to approximate the target. Since the logistic regression model has minimal nonlinearity and no memory of its own, the task performance significantly reflects the computational capability of the system itself. Here, we introduced the mutual information (MI) between the output and target as an evaluation measure; MI takes a value within [0, 1] and approaches one as the performance increases (see the Appendix for the MI algorithm). We also calculated the performance capacity, the sum of MI over delays, to assess the overall computational capacity. We prepared 1250 time steps of training data and 1250 time steps of evaluation data. In addition, to investigate the dependence of the information-processing capacity on the number of sensors, elements among the 20,000 nodes were randomly chosen (we tested several sizes up to 1000). We also measured the performance with the bend-sensor dynamics as the baseline. Figure 4A depicts the performance function and memory capacity, suggesting that the performance monotonically improved as the number of chosen elements increased and became comparable with the actual sensory dynamics; moreover, the memory capacity saturated. These results revealed the following two points: (i) our system brought out the information-processing capability of the soft octopus arm better than the actual 10 bend sensors, and (ii) the computational capacity of the soft octopus arm can be sufficiently extracted by embedding accelerometers.
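The short-term memory evaluation can be sketched end to end on a toy surrogate: a leaky delay line stands in for the extracted skeletal dynamics, and an ordinary least-squares readout with thresholding stands in for the logistic regression. Only the overall procedure, not the paper's exact setup, is reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(a, b):
    """MI (in bits) between two binary sequences; at most 1 for 0/1 data."""
    eps = 1e-12
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b + eps))
    return mi

# Surrogate "body": a leaky delay line driven by the binary input
# (a stand-in for the skeleton dynamics, NOT the physical arm)
T, dim = 3000, 30
u = rng.integers(0, 2, T).astype(float)
x = np.zeros((T, dim))
for t in range(1, T):
    x[t] = 0.7 * np.roll(x[t - 1], 1)   # shift and decay the previous state
    x[t, 0] = u[t]                      # inject the current input

def memory_mi(n, train=2000):
    """Train a linear readout to recover u(t-n) from x(t); return test MI."""
    X, y = x[n:], u[:T - n]                           # align state with delayed input
    Xb = np.hstack([X, np.ones((len(X), 1))])         # bias term
    w, *_ = np.linalg.lstsq(Xb[:train], y[:train], rcond=None)
    pred = (Xb[train:] @ w > 0.5).astype(float)       # threshold the readout
    return mutual_information(pred, y[train:])
```

Delays within the surrogate's memory span are recovered almost perfectly (MI near 1), while longer delays collapse toward 0, mirroring the saturation behavior used in the text to judge how many sensor channels are enough.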
FIG. 4.

(A) Results of the short-term memory task. Left: performance function. The averaged mutual information values over 10 trials are plotted; the performance with the 10 bend sensors is plotted as the black dotted line. The number in the labels denotes the dimension of the selected dynamics. Right: performance capacity. The dotted line shows the capacity with the actual sensory data; the error bar shows the standard deviation over 10 trials. (B) Reconstruction of sensor time series. Left: reconstruction of the 10 bend-sensor dynamics. The figure displays both the actual sensor values (dotted) and the values predicted by the linear model (red). Right: reconstruction error. The sum of NMSEs for the 10 sensor dynamics is plotted; the error bar represents the standard deviation over 10 trials. NMSE, normalized mean square error. Color images are available online.

Furthermore, we demonstrate that our framework can estimate the number of sensors required to extract the deformation dynamics even without the input information. We evaluated the reconstruction error of the bend-sensor dynamics using the extracted skeletal dynamics. A linear ridge regression model was used, and the normalized mean square error (NMSE) between the output and target dynamics was measured with the following equation:

NMSE(y, ŷ) = Σ_t (ŷ(t) − y(t))² / Σ_t (y(t) − ȳ)²,

where ȳ is the average of y(t). The sum of NMSEs over the 10 bend-sensor dynamics was calculated to measure the overall reconstruction accuracy. Figure 4B displays the reconstruction error. The bend-sensor dynamics were effectively reconstructed from the extracted skeletal dynamics. The NMSE evaluation also showed that the reconstruction accuracy monotonically improved and then saturated, which was consistent with the results of the short-term memory task. In this way, the number of accelerometers required to extract the deformation dynamics can be estimated even with sensory information of a different modality.
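The ridge reconstruction and the NMSE measure can be sketched as follows (closed-form ridge with an assumed regularization weight `lam`; the NMSE normalization shown is the standard one and may differ from the paper's in detail):

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    """Closed-form linear ridge regression with a bias column."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def nmse(target, pred):
    """Residual power normalized by the target's variance around its mean."""
    target, pred = np.asarray(target, float), np.asarray(pred, float)
    return np.sum((pred - target) ** 2) / np.sum((target - target.mean()) ** 2)

# Sanity checks: perfect prediction gives 0; predicting the mean gives 1
y = np.sin(np.linspace(0.0, 6.0, 100))
err_perfect = nmse(y, y)
err_mean = nmse(y, np.full_like(y, y.mean()))

# Reconstructing a linear target from surrogate "skeleton" features
X = np.random.default_rng(1).normal(size=(200, 5))
y2 = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 1.0
w = ridge_fit(X, y2)
pred = np.hstack([X, np.ones((200, 1))]) @ w
```

Summing `nmse` over the 10 sensor channels, as in the text, then gives a single scalar to plot against the number of skeleton elements used.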
As with the case studies in the Analysis of Biological Data section, our framework can, of course, be used to extract the skeletal dynamics of other soft robotic systems, offering useful information for control (see Supplementary Video S7, skeletonizing a soft manipulator from a video published by Truby et al.[58]). Also, 3D skeletal coordinates can easily be obtained with multiple videos from different viewpoints. We prepared a simple setup recording the behavior of the soft octopus arm with two cameras arranged in the vertical direction and reconstructed the 3D arm dynamics (Supplementary Video S8 and Appendix). This 3D skeletonization would help estimate the posture of soft robotic systems.

Discussion

In this article, we proposed a framework for automatically extracting skeletal dynamics from video of a soft continuum body. Since most of the annotation process is automated compared with conventional methods, our framework can efficiently skeletonize soft continuum bodies. Also, the skeletal curve can be extracted with arbitrary accuracy by adjusting the tracking width and normalization parameter. In the Analysis of Biological Data section, we exhibited that our framework efficiently extracts the skeletal dynamics from video recordings of animal movement. We showed that our framework can simultaneously extract multiple skeletal dynamics through a demonstration with a brittle star video. We also illustrated that our framework is applicable to analyzing animal behavior. In the brittle star demonstration, for example, we demonstrated that the macroscopic arm movements of the five-armed brittle star were effectively reflected in the analysis with correlation matrices. We showed that O. brachyaspis continuously switches the leading arm, which was neither quantitatively evaluated nor visualized in the previous study.[42] The extracted arm dynamics of H. vulgaris could be used to study the mechanism of behavioral generation when analyzed together with imaged cell activity data.[44] In this way, our framework is promising for studying animal behaviors. In the Designing Sensory Configuration for Soft Robotic Arm section, we verified the minimum number of sensors sufficient for fully grasping the state dynamics by evaluating the information-processing capacity through the short-term memory task. We also demonstrated that the number of required sensors can be estimated without input information through the reconstruction of the actual sensory dynamics.
From the viewpoint of morphological computation, it is desirable to know the potential computational capacity of a body before deciding on the sensor configuration.[59-62] However, implanted sensors often worsen a soft material's deformability, which limits the number of attachable sensors. Our framework can be employed to estimate the optimal sensor positions of soft robots in a noninvasive manner, which is helpful in the design of soft robots. For example, by optimizing the sensor placements to maximize a measure called the effective dimension,[63] we can remove redundant sensors and efficiently capture the internal state of the soft body with a limited number of sensors. Another possible scenario is that our framework can be used to estimate the minimum dimension needed to model the deformation dynamics of the soft body. Since redundant sensors are wiped out by the optimization, the number of obtained optimal sensors would be related to the dimension required for modeling the soft body. In sum, our framework offers a useful indicator for designing soft robot setups. Finally, we discuss possible directions for extending our framework. The accuracy of our framework depends highly on the performance of the background subtraction algorithm. In this study, we prepared movies in which the background and the object could be easily binarized with a single threshold value. It is, however, necessary to introduce advanced background subtraction algorithms such as Refs.[64,65], especially when the background has a complicated pattern, such as in a natural environment. Moreover, our framework cannot extract the skeleton of a soft continuum body that overlaps itself in the image, limiting the scalability of our framework in the control of soft robots. To solve this problem, a 3D volume video should be used instead of a two-dimensional video.
In particular, the FMM algorithm, a core algorithm in our framework, can be easily extended to a 3D volume image, which should be developed in future work.
Algorithm 1 Backtracking in Skeletal Curve Extraction
1: P_t ← ∅ ⊳ point sequence on the skeletal curve
2: p ← d_t
3: while ‖p − s_t‖ > δ do
4:   P_t ← P_t ∪ {p}
5:   p ← p − δ ∇T_t(p)/‖∇T_t(p)‖ ⊳ approximating ∇T_t with RK2
6: P_t ← P_t ∪ {s_t}
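A runnable sketch of this backtracking loop, assuming a precomputed 2-D traveling-time array and using simple Euler steps with a nearest-neighbor gradient lookup instead of RK2:

```python
import numpy as np

def backtrack(T, tip, basal, delta=0.5, max_steps=10_000):
    """Descend the traveling-time field T from the tip toward the basal
    point; points are (row, col) floats.  Illustrative only."""
    gy, gx = np.gradient(T)
    basal = np.asarray(basal, float)

    def unit_grad(p):
        iy, ix = int(round(p[0])), int(round(p[1]))  # nearest-neighbor lookup
        g = np.array([gy[iy, ix], gx[iy, ix]])
        n = np.linalg.norm(g)
        return g / n if n > 0 else g

    p = np.asarray(tip, float)
    path = [p.copy()]
    for _ in range(max_steps):
        if np.linalg.norm(p - basal) <= delta:
            break                                    # within delta of s_t
        p = p - delta * unit_grad(p)                 # step against grad T
        path.append(p.copy())
    path.append(basal.copy())                        # snap final point to s_t
    return np.array(path)

# Toy field: T increases linearly with distance from the source column,
# so the path runs straight back to the basal point
T_field = np.tile(np.arange(12.0), (5, 1))
path = backtrack(T_field, tip=(2.0, 10.0), basal=(2.0, 1.0))
```

The returned sequence plays the role of P in Algorithm 1 and would then be smoothed and resampled as described in the Skeletal Curve Extraction section.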
References (26 in total)

1.  Towards a theoretical foundation for morphological computation with compliant bodies.

Authors:  Helmut Hauser; Auke J Ijspeert; Rudolf M Füchslin; Rolf Pfeifer; Wolfgang Maass
Journal:  Biol Cybern       Date:  2012-01-31       Impact factor: 2.086

2.  Exploiting the Dynamics of Soft Materials for Machine Learning.

Authors:  Kohei Nakajima; Helmut Hauser; Tao Li; Rolf Pfeifer
Journal:  Soft Robot       Date:  2018-04-30       Impact factor: 8.071

3. Design, fabrication and control of soft robots. (Review)

Authors:  Daniela Rus; Michael T Tolley
Journal:  Nature       Date:  2015-05-28       Impact factor: 49.962

4. Flexible Electronics toward Wearable Sensing. (Review)

Authors:  Wei Gao; Hiroki Ota; Daisuke Kiriya; Kuniharu Takei; Ali Javey
Journal:  Acc Chem Res       Date:  2019-02-15       Impact factor: 22.384

5.  Locomotion without a brain: physical reservoir computing in tensegrity structures.

Authors:  K Caluwaerts; M D'Haene; D Verstraeten; B Schrauwen
Journal:  Artif Life       Date:  2012-11-27       Impact factor: 0.667

6. Soft robotics: Technologies and systems pushing the boundaries of robot abilities. (Review)

Authors:  Cecilia Laschi; Barbara Mazzolai; Matteo Cianchetti
Journal:  Sci Robot       Date:  2016-11-16

7.  DeepLabCut: markerless pose estimation of user-defined body parts with deep learning.

Authors:  Alexander Mathis; Pranav Mamidanna; Kevin M Cury; Taiga Abe; Venkatesh N Murthy; Mackenzie Weygandt Mathis; Matthias Bethge
Journal:  Nat Neurosci       Date:  2018-08-20       Impact factor: 24.884

8.  OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields.

Authors:  Zhe Cao; Gines Hidalgo Martinez; Tomas Simon; Shih-En Wei; Yaser A Sheikh
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2019-07-17       Impact factor: 6.226

9.  Neuromuscular control of trout swimming in a vortex street: implications for energy economy during the Karman gait.

Authors:  James C Liao
Journal:  J Exp Biol       Date:  2004-09       Impact factor: 3.312

10.  Information processing via physical soft body.

Authors:  Kohei Nakajima; Helmut Hauser; Tao Li; Rolf Pfeifer
Journal:  Sci Rep       Date:  2015-05-27       Impact factor: 4.379

