Literature DB >> 35161819

Machine Learning Techniques for Increasing Efficiency of the Robot's Sensor and Control Information Processing.

Yuriy Kondratenko1, Igor Atamanyuk2,3, Ievgen Sidenko1, Galyna Kondratenko1, Stanislav Sichevskyi1.   

Abstract

Real-time systems are widely used in industry, including technological process control systems, industrial automation systems, SCADA systems, testing and measuring equipment, and robotics. The efficiency of executing an intelligent robot's mission in many cases depends on the properties of the robot's sensor and control systems in providing the trajectory planning, recognition of the manipulated objects, adaptation of the desired clamping force of the gripper, obstacle avoidance, and so on. This paper provides an analysis of the approaches and methods for real-time sensor and control information processing with the application of machine learning, as well as successful cases of machine learning application in the synthesis of a robot's sensor and control systems. Among the robotic systems under investigation are (a) adaptive robots with slip displacement sensors and fuzzy logic implementation for sensor data processing, (b) magnetically controlled mobile robots for moving on inclined and ceiling surfaces with neuro-fuzzy observers and neuro controllers, and (c) robots functioning in unknown environments with the prediction of the control system state using statistical learning theory. All obtained results concern the main elements of the two-component robotic system with the mobile robot and adaptive manipulation robot on a fixed base for executing complex missions in non-stationary or uncertain conditions. The design and software implementation stage involves the creation of a structural diagram and description of the selected technologies, training a neural network for recognition and classification of geometric objects, and software implementation of control system components. The Swift programming language is used for the control system design, and the CreateML framework is used for creating a neural network.
Among the main results are: (a) expanding the capabilities of the intelligent control system by increasing the number of classes for recognition from three (cube, cylinder, and sphere) to five (cube, cylinder, sphere, pyramid, and cone); (b) increasing the validation accuracy (to 100%) for recognition of five different classes using CreateML (YOLOv2 architecture); (c) increasing the training accuracy (to 98.02%) and testing accuracy (to 98.0%) for recognition of five different classes using the Torch library (ResNet34 architecture) in less time and fewer epochs compared with CreateML (YOLOv2 architecture); (d) increasing the training accuracy (to 99.75%) and testing accuracy (to 99.2%) for recognition of five different classes using the Torch library (ResNet34 architecture) and fine-tuning technology; and (e) analyzing the impact of dataset size on recognition accuracy with the ResNet34 architecture and fine-tuning technology. The results can help to choose efficient (a) design approaches for control robotic devices, (b) machine-learning methods for performing pattern recognition and classification, and (c) computer technologies for designing control systems and simulating robotic devices.

Keywords:  canonical decomposition; classification; control system; fuzzy logic; machine learning; neural network; pattern recognition; real-time system; robotics; sensor

Year:  2022        PMID: 35161819      PMCID: PMC8839626          DOI: 10.3390/s22031062

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


1. Introduction

With the development of technology, real-time systems have applications in various fields. Real-time systems are widely used in industry, including technological process control systems, industrial automation systems, SCADA systems, testing and measuring equipment, and robotics [1,2,3,4]. Modern technology has increased interest in robotic systems and increased the amount of research conducted in this area. Studies of such systems in many areas can make human life easier. For example, robotics has become an essential technology for the automation industry. Robots ensure maximum accuracy without human error when performing tasks [5]. Modern intelligent robots have high dynamic performance and function productively under certain operating modes. The task of robot control is complicated when robots work in uncertain environments because they usually lack full functionality. Equipping robots with efficient remote and tactile sensor systems provides significant functionality and technological capability [3,6]. Intelligent capabilities are essential for modern robots to gain experience and adapt to natural nonstationary working environments for executing various missions. Service robots acting in uncertain conditions have become even more widely used in recent years [7,8,9,10]. Many modern robots act in clinics, offices, supermarkets, cinemas, enterprises, etc. [11,12,13]. In order for robots to become part of a team and help with tasks in different situations, in particular in dynamic environments inhabited by people, they must move efficiently and without accident [14,15,16] in the target area. The efficiency of executing an intelligent robot's mission in many cases depends on the properties of the robot's sensor and control systems in providing the trajectory planning, recognition of the manipulated objects, adaptation of the desired clamping force of the gripper, obstacle avoidance, and so on (drones, unmanned underwater robots, etc.)
[17,18,19,20,21]. To realize efficient robot performance in real-time, particularly in unknown or uncertain environments, stringent requirements on the parameters (indicators) of the robot's sensor and control systems must be satisfied. First of all, this concerns: increasing the accuracy of the sensor information of the tactile or remote sensors; minimizing the time of sensor signal formation; decreasing the time of sensor and control information processing; decreasing the time of the robot control system's decision-making process in uncertain conditions or a dynamic working environment with obstacles; and extending the functional characteristics of the robots based on the implementation of efficient sensors and high-speed calculation algorithms. Artificial intelligence (AI) methods and algorithms are a promising tool for designing a robot's sensor and control systems with improved technical characteristics. Machine learning (ML) is a part of artificial intelligence. ML algorithms build a model based on training data (sample data) for predictions and/or decision-making [22] without implementing traditional programming approaches. ML techniques are used in various applications, such as image recognition, email filtering, speech recognition, human activity recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks [23]. Special attention must be paid to implementing various machine learning algorithms and approaches in robotics because robotics and artificial intelligence, including machine learning, increase and amplify human abilities, enhance productive capacity, and move from simple thinking toward human cognitive skills.
It opens new opportunities for increasing the efficiency of sensor information processing, recognizing the current situation in the robot working zone, control signal processing to realize the desired trajectories, automatic generation of alternative hypotheses, and decision-making in real-time. Among the most popular machine learning methods and approaches are neural nets, fuzzy sets, fuzzy logic, reinforcement learning, deep learning, semi-supervised learning, time series analysis, unsupervised learning, and regression analysis. The aim of this work is the development, investigation, and implementation of different machine learning techniques, including fuzzy logic, neuro systems and networks, combined neuro-fuzzy approaches, and methods of statistical learning theory, for increasing the efficiency of sensor and control information processing in advanced multi-component robotic complexes. Such multi-component robotic complexes (MCRC) are automatic two-robot systems of a special class that function in non-stationary, uncertain, or unknown working environments. An MCRC consists of a moving mobile robot and an adaptive robot with a fixed base (a manipulator with an adaptive gripper), as well as sensor and control systems. The adaptive robot may be installed on the hull of the moving mobile robot. Mobile robots in an MCRC serve as moving motherships for the adaptive robot with a fixed base and can deliver it to any target point of the working surface for executing the corresponding mission. The sensor system of an MCRC may consist of various types of tactile sensors, video sensors, and different remote sensors, depending on the MCRC missions. High accuracy of manipulation operations, high speed of sensor and control information processing, and high functioning reliability of the mobile and adaptive robots are the main requirements that can be satisfied by the implementation of modern machine learning techniques.
The rest of the article covers multiple aspects related to the topic. Section 2 deals with the analysis of published related works and the formulation of the problem statement. Section 3 covers a general representation of the proposed fuzzy information processing technique for the adaptive robot's sensor system, which detects the slip displacement signal and recognizes the direction of the unknown object's slippage. Section 4 presents a neuro-fuzzy observer of clamping force and a neuro controller for the control system of the mobile robot, which can move on inclined and vertical ferromagnetic surfaces. In Section 5, the authors provide a detailed description of the prediction procedure for providing reliable functioning of the MCRC, including robot sensor and control systems, based on the canonical decomposition of the statistical data. Section 6 and Section 7 deal with the implementation of open-source software for designing the adaptive robot's control system, with the training of a convolutional neural network for the recognition of the object shape in the working zone of the robot based on video-sensor information. The paper ends with a conclusion in Section 8.

2. Related Works and Problem Statement

In recent years, the role of machine learning has significantly increased. Machine learning techniques have many successful applications in different areas of human activity, in particular, in medicine [4,24,25,26,27,28,29,30,31], agriculture [32,33,34,35,36], transportation [37,38,39,40,41,42], energy production [43,44], finance markets [45], investment policy [46] and research [47]. Statistical learning theory is efficiently used for processing data from sensors in real-time based on effective multi-output Gaussian processes [48] and for prognosis of the state of technical objects using canonical decomposition of a random sequence [49]. Let us analyze the peculiarities of ML techniques' implementation in robotics, as this article is devoted to increasing the efficiency of the robot's sensor and control information processing using appropriate machine learning algorithms. Machine learning techniques are successfully implemented for robot control with model-based reinforcement learning [50], for convergence of machine learning methods and robotics in co-assembly [51], for intelligent and autonomous surgical robotics [28], for speed control when creating robots of the "leader-follower" type with the application of fuzzy sets and fuzzy logic, and supervised machine learning [52], for computer-aided design based on machine learning for space research and control of autonomous aerial robots [53], and for robotics and automation using simulation-driven machine learning [54].

2.1. Machine Learning Techniques for Robotics in Industrial Automation

Rajawat et al., in [55], introduce a newer approach to process automation using robotic devices, which increases efficiency and product quality by combining the control and repeatability of robotics with human flexibility and functionality through artificial intelligence and machine learning techniques. The paper [56] discusses an application of ML methods (random forest, artificial neural network) to accurately model surface roughness in wire arc additive manufacturing. In [57], Wang et al. propose an image processing method based on machine learning algorithms, especially for robotic assembly systems. Mayr et al. discuss optimizing the linear winding process in electric motor manufacturing based on machine learning techniques and sensor integration [58]. Al-Mousawi, in [59], synthesizes a detection system for magnetic explosives based on machine learning techniques and a wireless sensor network. Martins et al., in [60], consider the application of machine learning techniques to cognitive robotic process automation. Segreto et al. propose using different machine learning techniques for in-process end-point detection in robot-assisted polishing with multiple sensor monitoring [61].

2.2. Machine Learning in Robot Path Planning and Control

The approach (with application to mobile robots) presented in [62] uses machine learning techniques to improve the connection between low-level and high-level representations of sensing and planning, respectively. Qijie et al. propose [21] a path planning algorithm for mobile robots in unknown and uncertain environments based on rapidly exploring random trees and reinforcement learning SARSA (λ). The article [63] concerns exoskeleton robot applications and presents various data modes as input parameters to machine learning models to increase the timeliness, motion accuracy, and safety of gait planning. In [64], the authors provide a review of deep learning and robotic grasping, as well as tracking and gait planning problems. The basketball-training robot provides intelligent autonomous path planning and approaches the target point by avoiding obstacles [65]. The robot path planning approach with the implementation of the deep reinforcement learning method is described in [20].

2.3. Machine Learning for Information Processing in Robot Tactile and Remote Sensors

The review in [66] examines the combination of electronic skins and machine learning techniques. The authors demonstrate how researchers can use the latest developments from the above two areas to create autonomous robots with deployable functions, integrated with informative sensory and proprioceptive capabilities to face complex conditions in real situations. Ibrahim et al. present embedded machine learning methods [67] for near-sensor tactile data processing. Keser et al. use ML techniques for surface roughness recognition based on fiber optic tactile sensor data [68]. In [69], ML regression algorithms are trained based on proprioceptive sensing for predicting slippage of individual wheels in off-road mobile robots. Wei et al. propose a fusion method applying support vector machines and evidence theory for robot target detection and recognition using multi-sensor information processing [70]. Martinez-Hernandez et al. use a tactile robot for autonomous and adaptive exploration of object shape using learning from sensory predictions [71]. A smart capacitive sensor skin with embedded data quality indication for enhanced safety in human–robot interaction is proposed in [72] with the implementation of two ML algorithms, in particular, a neural network and a support vector machine.

2.4. Machine Learning in Robot Computer Vision

Joshi et al., in [73], demonstrate a method based on deep reinforcement learning for solving a robotic gripping problem using visuo-motor feedback. A posture assessment system based on an "eye-to-hand" camera has been developed in [74] for robotic machining, and the accuracy of the estimated pose is improved using two different approaches, namely sparse regression and LSTM neural networks. Inoue et al. propose [19] a machine vision approach for robots with autonomous navigation based on a stereo camera and convolutional neural networks (a deep learning technique) for obstacle avoidance. Mishra et al. consider robotic vision solutions for pedestrian detection in the working zone of mobile robots based on deep learning techniques [75].

2.5. Machine Learning for Increasing Reliability and Fault Diagnostics

To increase the productivity of automated industrial processes, it is necessary to monitor and estimate the current state of the robots and manufacturing equipment. Long et al. use attitude data for intelligent fault diagnosis of multi-joint industrial robots [76] based on a deep hybrid learning structure (sparse auto-encoder and support vector machine). Subha et al. consider the problem of sensor fault diagnostics for autonomous underwater robots using an extreme learning machine [77]. Severo de Souza, in [78], considers increasing the reliability of the production system by detecting abnormal sensors based on machine learning techniques and information from the wireless sensor network. In [2], a predictive maintenance system for production lines in manufacturing can detect signals of potential failures before they occur based on the real-time application of IoT data and machine learning techniques. Kamizono et al., in [79], propose a fault detection and classification approach based on a neural network with a harmonic sensor for preventing robotic errors. The above analysis of recent publications on the implementation of machine learning in robotics shows that researchers continue to improve machine learning techniques [47,80,81] and develop new machine learning solutions [40,82,83,84,85] for intelligent robots using well-known and new design methods, approaches, and methodologies. This concerns, first, the new and specific missions of ground, underwater, and aerial mobile robots, as well as the high level of information uncertainty concerning the nature of the robot environment, the changing character of manipulated object parameters, and the unknown behavior of dynamic obstacles. Additional design requirements also stimulate research into the development of new approaches for the implementation of machine learning methods and algorithms in modern robotics.
Thus, developing new design methods, algorithms, and models is reasonably necessary to provide efficient sensor and control information processing. It will simultaneously improve the design processes for robot navigation-control systems and increase the control indexes of their functioning in uncertain environments. The problem statement of this article deals with the implementation and investigation of advanced ML techniques based on fuzzy sets theory, the theory of neuro systems, statistical learning theory, and others, for increasing the efficiency of robot sensor and control information processing in multi-component robotic complexes that function in non-stationary, uncertain, or unknown working environments. The missions of the considered MCRC may deal with a preliminarily unknown or changeable mass of the manipulated objects and with the need to avoid obstacles or to correct the adaptive robot's path in collisions with obstacles [3,6,7,86,87,88,89,90,91,92,93]. In this case, the adaptive robot with a fixed base should have the possibility to identify the object mass and the directions of the manipulated object's slippage. Such adaptive robots may be equipped with different tactile sensors and can work separately or may be installed on the moving mothership mobile robot as the second robotic component of the MCRC. For mobile robots that can move on inclined, vertical, or ceiling ferromagnetic surfaces (ship hulls, for example), it is very important to ensure high indicators of control and maneuvering characteristics and to provide the required magnetic clamping force between the mobile robot and the working surface [13,15,17,94,95,96,97,98]. The mobile robot should track the working surface parameters and create a reliable value of the clamping force for highly reliable MCRC functioning under the action of surface disturbances, taking into account that a ferromagnetic surface may be covered by nonmagnetic layers of dead microorganisms.
In these cases, the size of the gap may have a non-stationary character. It is also necessary to keep and support the MCRC's high functionality [34,49,84] by controlling and predicting the technical state of all MCRC components. During the regular operation of the MCRC, it is necessary to evaluate in real-time the operability of the corresponding devices for sensor and control information processing, to ensure their continuous operation, and to predict possible failures. For many important MCRC missions, real-time object recognition can be provided by using the video camera on the manipulator's arm. In these cases, the recognized objects should be classified, and the images should be transmitted to the control panel, for example, a mobile phone. Creating ML models of neural networks with YOLOv2 and ResNet34 architectures is a promising approach for the recognition and classification of different objects in images. For the development of the optimal structure of the MCRC's control system during the design process, it is necessary to implement a simulation approach based on the MoveIt environment, which allows obtaining the configuration files of the manipulator's arm and transferring them to the MCRC's control system.
Finally, let us formulate the aims of this research as follows: implementing machine learning algorithms for the extension of the functional features of adaptive robots, in particular, using fuzzy and neuro-net approaches for sensor information processing within the recognition of the slippage direction of manipulated objects in the robot gripper during contact with obstacles; approximating the non-stationary "clamping force—air gap" functional dependence based on a neuro-fuzzy technique for the mobile robot control system, which provides increased reliability for robot movement on inclined ferromagnetic surfaces; implementing statistical learning theory for increasing the efficiency of a robot's sensor system based on the developed algorithms of prediction control; and developing machine learning models and corresponding software for recognizing manipulated objects [99] using video-sensor information processing, with a discussion of the peculiarities of the convolutional neural network's training process.

3. The Machine Learning Algorithms for Extension of Functional Properties of Adaptive Robots with Slip Displacement Sensors

One of the efficient approaches to determining the unknown mass of a manipulated object and the desired value of the clamping force is the use of tactile sensors that detect object slippage between the gripper fingers [3,6,12,86,87,88,89,90,91,92,93,100,101,102]. Besides this, slip displacement information can be used to recognize objects from a set of alternatives and to correct the control algorithm and the robot gripper's trajectory. The design process for slip displacement sensors (SDS) is based on the implementation of different detection methods [89,92,103], including rolling motion, vibration, or changing the configuration of sensitive elements, friction registration, oscillation of the circuit parameters, displacement of the fixed sensitive elements, and others. Let us discuss the task of tactile sensor information processing based on machine learning algorithms to recognize object slippage and the slippage direction (in the gripper of the adaptive robot). Most SDSs provide the robot control system with sensor information only about slippage as an event (1—true, 0—false). An additional problem is identifying the slippage direction, which is very important when a robot gripper contacts an unknown obstacle in the robot working zone [89,103,104]. In many cases, obstacles appear in a random manner. In collision situations, the slippage direction depends on the robot gripper's trajectory and the coordinates of the obstacles in the adaptive robot's working space. It is possible to acquire information about the slippage direction using (a) multi-component slip displacement sensors and (b) machine learning algorithms (fuzzy logic, neural networks) for sensor information processing. Multi-component slip displacement sensors can detect the sensitive rod displacement in the special cavity using a group of Hall sensors [90], capacitive sensors [89,102,103], or resistance sensors based on electro-conductive rubber [88].
Let us consider a fuzzy logic approach for identifying slippage direction based on the capacitive slip displacement sensor [89,103], presented in Figure 1.
Figure 1

Capacitive slip displacement sensor with two-part sensitive rod: (a) front view; (b) right view: 1, 2—first and second cavities of the robot’s finger; 3—the robot’s finger; 4—two-part rod (sensitive element); 5—elastic tip; 6—elastic contact surface; 7—spring; 8—resilient element; 9, 10—multi-component capacitor plates.

The SDS is placed on at least one of the gripper fingers (Figure 1). The recording element consists of four capacitors distributed across the conical surface of the special cavity (2). One plate (9) of each capacitor is located on the rod (4) surface, and the second plate (10) on the inner surface of the cavity (2). The sensitive element 4 will move in one of the directions {N, NE, E, SE, S, SW, W, NW} or intermediate positions, for example, from point O to point P, depending on the corresponding direction of the object slippage in the robot gripper in cases of contacting obstacles (Figure 2).
Figure 2

Sensor for recognition of slippage direction: 1, 2, 3, 4—capacitors C1, C2, C3, and C4; 5—deviating rod; point O—initial position of the sensitive rod before object slippage; point P—final position of the sensitive rod after object slippage; N (0°/360°), NE (45°), E (90°), SE (135°), S (180°), SW (225°), W (270°), NW (315°)—directions of the object slippage; OXYZ—coordinates system for the robot's working space; OX—direction for creating clamping force between gripper fingers; α—slippage direction.

Reciprocal movements of plates (9) and (10) in all capacitive elements lead to changes in the capacitances C1, C2, C3, and C4, depending on the direction of the rod's movement. Using experimental data for different displacement directions and the corresponding changes of the capacitances C1, C2, C3, and C4, it is possible to build and train a neural network with four input signals (C1, C2, C3, C4) and one output signal (the direction α within the interval [0°, 360°]). Another machine learning approach, based on fuzzy logic implementation [38,103,105], can be realized through the adjustment of the rule consequents using the above-mentioned experimental data or simulation results. The structure of the fuzzy system (FS) of the Mamdani type [106] for slip displacement sensor information processing is presented in Figure 3, and a fragment of the fuzzy rule base is presented in Figure 4.
Figure 3

The structure of the fuzzy system with 3 linguistic terms for input signals and 9 linguistic terms for the output signal.

Figure 4

The fragment of the fuzzy rule base for the identification of the slippage direction.
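The neural-network variant mentioned above, with four capacitance inputs and one direction output, can be sketched as follows. The sensor model used to generate the training data is a purely illustrative assumption (differential response of opposing capacitor pairs), not the authors' experimental data. Because the direction is circular (0° coincides with 360°), the sketch predicts (sin α, cos α) and recovers α with atan2, which avoids the discontinuity at 0°/360°.

```python
# Illustrative sketch: a small MLP mapping four measured capacitances
# (C1..C4) to the slippage direction. All data here are synthetic.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

def synthetic_capacitances(alpha_deg):
    """Toy sensor model (an assumption for illustration only): opposing
    capacitor pairs respond differentially to the rod deviation."""
    a = math.radians(alpha_deg)
    return [1.0 + 0.5 * math.cos(a),   # C1 (N side)
            1.0 + 0.5 * math.sin(a),   # C2 (E side)
            1.0 - 0.5 * math.cos(a),   # C3 (S side)
            1.0 - 0.5 * math.sin(a)]   # C4 (W side)

net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 2))
opt = torch.optim.Adam(net.parameters(), lr=0.01)

# Training set: directions every 5 degrees with their capacitance vectors.
angles = torch.arange(0.0, 360.0, 5.0)
X = torch.tensor([synthetic_capacitances(a.item()) for a in angles])
Y = torch.stack([torch.sin(torch.deg2rad(angles)),
                 torch.cos(torch.deg2rad(angles))], dim=1)

for _ in range(500):                    # short full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), Y)
    loss.backward()
    opt.step()

def predict_direction(c1, c2, c3, c4):
    """Recover the slippage direction in degrees from four capacitances."""
    s, c = net(torch.tensor([[c1, c2, c3, c4]]))[0]
    return math.degrees(math.atan2(s.item(), c.item())) % 360.0
```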

Each fuzzy rule in the fuzzy rule base has the following structure: IF (Condition—Antecedent), THEN (Result—Consequent). The rule base determines the dependence between the slip displacement direction α and the capacitance values C1, C2, C3, C4. C1, C2, C3, and C4 are the output signals of the corresponding capacitive sensitive components and, simultaneously, the input signals of the designed fuzzy system. The characteristic surfaces of the designed fuzzy system of the Mamdani type for slip displacement sensor information processing are presented in Figure 5.
Figure 5

Characteristic surfaces of the fuzzy system: (a) C1 = const; C2 = const; (b) C2 = const; C3 = const.

Simulation results show that the designed fuzzy system provides efficient sensor information processing and can calculate the slippage direction for any natural combination of the measured capacitance parameters C1, C2, C3, C4 corresponding to the displacement of the sensitive element (rod 4 in Figure 1). The application of the fuzzy logic method of sensor information processing allows expanding the functional properties of the adaptive robot with the possibility of correcting the trajectory for obstacle avoidance. It is possible to improve the quality of the designed fuzzy system and increase the efficiency of sensor information processing by using different structural and parametric optimization methods for fuzzy systems [107,108,109,110,111,112]. The novelty of the presented results consists of (a) the developed structure and intelligent rule base of the fuzzy system for sensor information processing during slip displacement detection and recognition of the object's slippage direction within the interval of 0–360 degrees, as well as (b) the developed ML algorithm and information communications between the proposed fuzzy system and the original capacitive multi-component slip displacement sensor. The engineering solution of the considered robot's slip displacement sensor (Figure 1) is protected by a Patent of Ukraine (Patent No. 52080).
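The Mamdani-type rule evaluation described in this section can be sketched in a few lines of plain Python. The membership-function parameters and the two-input, three-rule fragment below are illustrative placeholders; the actual fuzzy system uses four inputs (C1–C4) with three linguistic terms each and nine output terms.

```python
# Minimal sketch of Mamdani-type rule evaluation for slippage-direction
# identification. Parameters and rules are illustrative, not the paper's.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for a normalized capacitance change in [-1, 1].
TERMS = {
    "Small":  lambda x: tri(x, -1.5, -1.0, 0.0),
    "Middle": lambda x: tri(x, -1.0,  0.0, 1.0),
    "Large":  lambda x: tri(x,  0.0,  1.0, 1.5),
}

# Output terms: direction angles (degrees) for compass sectors.
DIRECTION = {"N": 0.0, "NE": 45.0, "E": 90.0, "SE": 135.0, "S": 180.0}

# Fragment of a rule base: (term for dC1, term for dC2) -> direction.
RULES = [
    (("Large", "Middle"), "N"),
    (("Large", "Large"),  "NE"),
    (("Middle", "Large"), "E"),
]

def infer(dc1, dc2):
    """Mamdani inference with min-AND and height defuzzification."""
    num = den = 0.0
    for (t1, t2), out in RULES:
        w = min(TERMS[t1](dc1), TERMS[t2](dc2))  # antecedent degree
        num += w * DIRECTION[out]
        den += w
    return num / den if den > 0 else None
```

Height defuzzification is used here for brevity; a full Mamdani system would clip the output membership functions and take the centroid of their union.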

4. Neuro-Fuzzy Techniques in Control Systems of Mobile Robots That Can Move the Operation Tool on Inclined, Vertical, and Ceiling Ferromagnetic Surfaces

In the modern world, there are particular needs for mobile robots (MR) to move on different horizontal, inclined, or vertical surfaces. MRs can use various types of propelling and pressure devices, for example, for cleaning the exterior parts of ships and other structures afloat or in a dry dock when automating processes in shipbuilding [98,113,114,115]. Such robots can perform various complex, resource-intensive works that are hazardous to people's lives and health: for example, cleaning large vertical surfaces and hard-to-reach places, decontamination under radiation conditions, installation of dowels and explosive devices, firefighting, painting, inspection, diagnostics, etc. An important element in the control of such robots is ensuring reliable adhesion (gripping) of the MR to the surface and its retention without slipping when performing various tasks [98]. Clamping devices with magnetic fastening provide an effective grip on the surface using electromagnets. The mobile robots presented in Figure 6 and Figure 7 improve the performance and reliability of technological operations on ferromagnetic surfaces [15,17,98,116,117].
Figure 6

Two intermediate states of the magnet-controlled wheel of mobile robots with different positions of the stepping legs: the movement on the ceiling (left) and vertical (right) electro-conductive surfaces.

Figure 7

Multipurpose caterpillar MR: 1—main clamping magnet; 2—ferromagnetic surface; 3—spherical joint; 4—frame; 5—right and left tracks; δ—clearance.

Machine learning methods based on an adaptive neuro-fuzzy inference engine can be successfully used for the synthesis of the clamping force observer [17,98] for the MR presented in Figure 6. The adaptive network-based fuzzy inference system (ANFIS) is an artificial neural network based on a fuzzy inference system (FIS) [98]. ANFIS is represented as a neural network with five layers and forward signal propagation (Figure 8). Each node in the first ANFIS layer corresponds to a linguistic term (LT) of a certain input signal. Thus, the total number of nodes is equal to the sum of all LTs for all input signals. The ANFIS in Figure 8 has two LTs (Small, Large) for the first input signal and three LTs (Small, Middle, Large) for the second, giving a total of five nodes in the first ANFIS layer. The second, third, and fourth layers of the ANFIS each consist of six nodes, according to the number of fuzzy rules, the degrees of antecedent realization, and the contributions of the corresponding fuzzy rules. These contributions are summarized in the single node of the fifth ANFIS layer to form the resulting output signal.
Figure 8

Functional structure of typical ANFIS with two inputs x1, x2, and one output y.

ANFIS improves sensor information processing based on the measurement of the gap between the robot’s clamping magnet and a non-stationary ferromagnetic surface covered by non-ferromagnetic components. The shape of the membership functions of the input variables’ linguistic terms significantly affects the ANFIS training process and the accuracy of the clamping force observer. The comparative results demonstrate the high accuracy of the desired clamping force calculated by the developed fuzzy-neuro observer. In particular, the training error of 0.187 for ANFIS with Gaussian-2 membership functions for the linguistic terms of its input signals is 1.23 and 2.75 times lower than with π-like and trapezoidal membership functions, respectively. The ANFIS characteristic surface for the input variable linguistic terms with Gaussian-2 membership functions is presented in Figure 9.
Figure 9

Characteristic surface based on Gaussian-2 membership functions.
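The observer's inference engine described above can be illustrated with a minimal forward pass of a first-order Sugeno-type ANFIS using two-sided (Gaussian-2) membership functions; all membership and consequent parameters below are hypothetical placeholders, not those of the trained observer.

```python
import math

def gauss2(x, s1, c1, s2, c2):
    """Two-sided (Gaussian-2) membership function: left Gaussian shoulder
    below c1, flat top between c1 and c2, right shoulder above c2."""
    if x < c1:
        return math.exp(-((x - c1) ** 2) / (2 * s1 ** 2))
    if x > c2:
        return math.exp(-((x - c2) ** 2) / (2 * s2 ** 2))
    return 1.0

def anfis_forward(x1, x2, mf1, mf2, consequents):
    """Forward pass of a five-layer ANFIS with 2 LTs for x1, 3 LTs for x2,
    and 2 * 3 = 6 rules (one rule per LT combination)."""
    # Layer 1: fuzzification (membership degree of each linguistic term)
    mu1 = [gauss2(x1, *p) for p in mf1]   # [Small, Large]
    mu2 = [gauss2(x2, *p) for p in mf2]   # [Small, Middle, Large]
    # Layer 2: rule firing strengths (product t-norm, 6 nodes)
    w = [a * b for a in mu1 for b in mu2]
    # Layer 3: normalization of firing strengths
    total = sum(w) or 1e-12
    wn = [wi / total for wi in w]
    # Layer 4: rule contributions (first-order consequents p*x1 + q*x2 + r)
    f = [p * x1 + q * x2 + r for (p, q, r) in consequents]
    # Layer 5: single summing node forming the output signal
    return sum(wi * fi for wi, fi in zip(wn, f))
```

The output is a convex combination of the rule consequents, which is what makes the characteristic surface (Figure 9) smooth in both inputs.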

Successful cases of improving control information processing deal with the implementation of fuzzy and neuro controllers for mobile robot control in uncertain or unknown environments. For example, an investigation of machine learning algorithms [116,117] for mobile caterpillar robots (Figure 7) demonstrated the high efficiency of neural controllers for keeping a mobile robot on the desired trajectory. The synthesis procedure for the designed controllers uses a genetic algorithm whose fitness function is based on the control quality of two output signals, speed and angle. The comparative results demonstrate the high efficiency of the proposed machine learning technique: the transient times of the MR control system with neuro controllers, compared with conventional PID controllers (with optimal parameters), are 2.2 times shorter for the MR’s speed control channel and 1.23 times shorter for the MR’s angle control channel. The novelty of the presented results lies in the application of machine learning techniques to original constructions of the magnetically wheel-controlled mobile robot (Figure 6), with a neuro-fuzzy observer of the clamping force, and the caterpillar mobile robot (Figure 7), with a neuro controller, for improving sensor and control information processing. The considered engineering solutions for mobile robots are protected by patents of Ukraine (Patent No. 45369, Patent No. 47369, Patent No. 100341).
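As an illustration of this kind of synthesis procedure, the following sketch evolves the gains of a two-channel (speed, angle) controller with a simple genetic algorithm; the toy plant model, the quadratic fitness function, and all numeric parameters are hypothetical stand-ins, not the caterpillar robot's dynamics or the authors' algorithm.

```python
import random

def control_cost(gains, setpoints=(1.0, 0.5), steps=50, dt=0.1):
    """Simulate two toy first-order channels (speed, angle) under
    proportional control and return a combined quadratic tracking cost."""
    kp_v, kp_a = gains
    v = a = 0.0
    cost = 0.0
    for _ in range(steps):
        v += dt * kp_v * (setpoints[0] - v)
        a += dt * kp_a * (setpoints[1] - a)
        cost += (setpoints[0] - v) ** 2 + (setpoints[1] - a) ** 2
    return cost

def genetic_search(pop_size=20, generations=30, seed=1):
    """Minimize control_cost over the two gains: truncation selection,
    averaging crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 5), rng.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=control_cost)          # lower cost = fitter
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = tuple((g1 + g2) / 2 + rng.gauss(0, 0.1)
                          for g1, g2 in zip(p1, p2))
            children.append(child)
        pop = parents + children
    return min(pop, key=control_cost)
```

Transients that diverge (overly large gains) receive a large cost and are culled, so the search settles on gains giving short transient times in both channels.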

5. Prediction Control of Robot Sensor and Control Systems Based on the Canonical Decomposition of the Statistical Data

The reason for sensor system errors is the natural wear of the sensors, as well as the peculiarities of their functioning [118,119,120,121,122]: critical operating conditions (high/low temperature, humidity, pressure, pollution, illumination, etc.); autonomous operation; changes in the mutual orientation of the sensor and the recognition object; (almost always) real-time operation; and limited resources. Errors in sensor systems significantly reduce the quality of operation of robot control systems, which can lead to catastrophic consequences (e.g., at critical infrastructure facilities). In this regard, real-time estimation of the operability of the control system [4,123] is an important and urgent task, taking into account the peculiarities of changing operating conditions and their investigation based on machine learning. To ensure high reliability of the operation of control systems, it is proposed to include a predictive control module [124,125] in the general structure: the system’s state is estimated at future points in time, followed by a decision on the suitability of its further use. In the general case, the parameter characterizing the quality of the system’s functioning (time of operation, the number of operations per unit of time, the accuracy of operation, etc.) is random. Therefore, to estimate the system’s future state, it is necessary to use the methods of the theory of random functions and random sequences. The canonical decomposition of the random sequence {X(i)}, i = 1, ..., I, of the changeable parameter at discrete moments in time is the most universal (from the point of view of restrictions) mathematical model [120]:

X(i) = M[X(i)] + Σ_{ν=1}^{i} V_ν β_ν(i), i = 1, ..., I,   (1)

where M[X(i)] is the mathematical expectation of X(i).
The elements of the canonical expansion are determined by the recurrent relations [120,124]:

V_ν = X(ν) − M[X(ν)] − Σ_{μ=1}^{ν−1} V_μ β_μ(ν), ν = 1, ..., I;   (2)

β_ν(i) = M[V_ν (X(i) − M[X(i)])] / M[V_ν^2], ν = 1, ..., I, i = ν, ..., I.   (3)

The coordinate functions are characterized by the properties β_ν(ν) = 1 and β_ν(i) = 0 for i < ν (4). The nonlinear model (1) of the random sequence contains an array of uncorrelated centered random coefficients V_ν. Each of these coefficients contains information about the corresponding value X(ν), and the coordinate functions β_ν(i) describe the probabilistic relations between the sections X(ν) and X(i). Sequential substitution of the observed values x(1), ..., x(k) into expression (1), with the subsequent application of the mathematical expectation operation, allows one to obtain an estimate of the investigated parameter at future points in time i > k:

m̂(0, i) = M[X(i)]; m̂(μ, i) = m̂(μ−1, i) + (x(μ) − m̂(μ−1, μ)) β_μ(i), μ = 1, ..., k,   (5)

where m̂(k, i) is the optimal estimate of the future value X(i) according to the criterion of the minimum mean square of the extrapolation error. The predictive model (5) can be converted to an explicit form convenient for real-time computation (Equations (6)–(8)), and the mean square of the forecast error is expressed through the variances of the random coefficients and the coordinate functions (Equations (9)–(11)). The operation of predictive control consists of checking whether the estimates of the investigated parameter belong to the interval of admissible values [a, b]:

a ≤ m̂(k, i) ≤ b.   (12)

If condition (12) is not met, a failure is recorded, and a decision to restore the system is made. The conditions m̂(k, i) < a or m̂(k, i) > b can also serve as criteria for the quality of the functioning of the control system. The training of the mathematical model and the extrapolator during system operation is carried out based on Equations (13)–(15): the estimates of the mathematical expectation and variance of the random coefficients are first obtained from the existing statistical database for systems of this class, and the parameters of models (3) and (7) are then refined using additional statistical data on the results of the functioning of the system under study. The proposed extrapolation method was tested on a model of a random sequence whose first three sections are uniformly distributed random variables.
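The canonical-expansion recurrences and the sequential refinement of the forecast can be sketched with a small ensemble-based estimator (an illustrative reimplementation, not the authors' code; all moments are estimated from a finite set of realizations):

```python
def canonical_model(realizations):
    """Build the canonical expansion X(i) = m(i) + sum_v V_v * beta_v(i)
    from an ensemble of realizations (equal-length sequences)."""
    R, I = len(realizations), len(realizations[0])
    m = [sum(x[i] for x in realizations) / R for i in range(I)]
    V = [[0.0] * I for _ in range(R)]      # V[r][v]: coefficients per realization
    beta = [[0.0] * I for _ in range(I)]   # beta[v][i]: coordinate functions
    for v in range(I):
        for r, x in enumerate(realizations):
            V[r][v] = x[v] - m[v] - sum(V[r][u] * beta[u][v] for u in range(v))
        D = sum(V[r][v] ** 2 for r in range(R)) / R   # variance of V_v
        for i in range(v, I):
            if D > 1e-12:
                beta[v][i] = sum(V[r][v] * (realizations[r][i] - m[i])
                                 for r in range(R)) / (R * D)
        beta[v][v] = 1.0
    return m, beta

def extrapolate(m, beta, observed):
    """Mean-square-optimal estimates of the future sections, given the
    first k observed values of a new realization (sequential refinement)."""
    k, I = len(observed), len(m)
    est = list(m)                          # initial estimate: expectation
    for mu in range(k):
        corr = observed[mu] - est[mu]      # discrepancy at section mu
        for i in range(mu, I):
            est[i] += corr * beta[mu][i]   # refine all later sections
    return est[k:]
```

On a sequence with an exact linear dependence between sections, a single observed value is enough for the refinement step to reproduce the remaining sections exactly, because the coordinate functions capture that dependence.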
Based on the results of 100 extrapolation experiments using the linear algorithm, the fourth-order Kalman filter, and the proposed nonlinear algorithm (of order 4, based on all previous values), the estimates of the standard deviation (SD) of the forecast error were obtained (Figure 10).
Figure 10

Forecast error standard deviation of the random sequence realizations for various extrapolation algorithms: comparative results.

Analysis of the standard deviation of the forecast error (Figure 10) indicates a high forecast accuracy when using the nonlinear methods (5) and (6) (curve “non-linear forecast” in Figure 10), in which the stochastic properties of the investigated random sequences are taken into account as much as possible (nonlinearity, use of the full amount of a posteriori information, non-stationarity). The extrapolation accuracy is 3–3.4 times higher compared to the Wiener–Hopf method [126] (“linear forecast” curve in Figure 10) due to the use of non-linear relationships, and 1.5–2.4 times higher compared to the Kalman method [126] due to the use of a larger volume of a posteriori information. The diagram in Figure 11 reflects the features of the functioning of the predictive control module.
Figure 11

Diagram of the predictive control module functioning.

At the time of putting the system into operation, the estimates of future values are calculated using Equations (5), (6), (9), and (10), based only on information about the functioning of systems of this class. During further operation, machine learning of the model and the extrapolator is performed based on statistical data about the system under study, using Equations (13)–(15). The module can function in real-time, considering that the initial parameters (5), (6), (9), and (10) of the forecast algorithm can be calculated in advance, before the start of system operation, and the formulae for training (13)–(15) and extrapolation (7) and (8) are computationally simple. A significant advantage of the proposed method for assessing the operability of a control system is the prevention of failures and, as a result, the assurance of continued operation at future points in time. The method also takes into account the individual characteristics of the control system: a priori information about the studied system is accumulated during operation, and operability is assessed from current measurements of the system state in real-time. The forecasting algorithm, unlike the known methods (Wiener–Hopf method, Kolmogorov polynomial, Kalman filter, etc.), does not impose any restrictions on the random sequence of changes in system parameters (linearity, monotonicity, ergodicity, stationarity, Markov properties, etc.), which makes it possible to achieve the maximum accuracy of predictive operability monitoring. The engineering solution of the considered method for predicting an object’s technical state is protected by a patent of Ukraine (Patent No. 73855).
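The operability check performed by the module reduces to testing whether the forecast, widened by its error band, stays inside the admissible interval. A minimal sketch (the three-sigma band and the return convention are illustrative assumptions, not the authors' specification):

```python
def operability_check(estimates, sigmas, limits, k_sigma=3.0):
    """Predictive control step: return the first future step at which the
    confidence band of the forecast leaves the admissible interval,
    or None if the system is predicted operable over the whole horizon."""
    low, high = limits
    for step, (x_hat, s) in enumerate(zip(estimates, sigmas), start=1):
        if x_hat - k_sigma * s < low or x_hat + k_sigma * s > high:
            return step       # predicted failure: schedule restoration
    return None               # condition (12)-style check passed
```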

6. Control System Design and Robot Arm Simulation

The development of an effective control system is a prerequisite for quality machine learning, accurate object recognition, and image classification. Machine learning and intelligent control methods are an important part of designing a control system and modeling a manipulator arm. Such a system can be used for sorting and orienting objects on a conveyor, remote control of dangerous and/or harmful objects, and automatic recognition of objects based on models trained using machine learning and computer vision technologies [24,27,42,105]. In general, the weaknesses of analogous products motivate an original software implementation [1,127]. Before software implementation, it is necessary to design the control system and simulate the manipulator’s arm. Figure 12 shows an illustrative structural diagram of the control system, listing the functions and properties of its services. These services are not bound to the control system itself and are used only where needed within the system. For example, the network service may be needed when viewing the log of all actions performed by the robot. Let us consider the relevant components in more detail.
Figure 12

An extended structural diagram of the control system.

6.1. “Control System” Component

General functions can be presented as: communication with a robotic device using a Bluetooth connection; selection of the system operation mode (testing the model, working with the device); intelligent device control using a trained neural network; storing information about the state of the device and the current progress of the task; synchronization of intermediate data with the server upon completion of the task by the device; and communication with the server using the REST (Representational State Transfer) API (Application Programming Interface). The list of internal dependencies and additional services consists of a BLE (Bluetooth Low Energy) service for connecting with a robotic device; a service for saving local data; a network service to interact with the server; a communication service to interact with a robotic device; and a data processing service with an artificial intelligence module. The artificial intelligence module is presented as a neural network for pattern recognition and classification. The architecture of this neural network is a convolutional neural network with the YOLOv2 (You Only Look Once) structure. It includes sequential convolution layers with the ReLU (Rectified Linear Unit) activation function, pooling layers for feature map definition, and a fully connected neural network for classification [127,128,129,130].

6.2. “Server” Component

General functions are communication with the control system using the REST API and saving the history of all devices’ operating states and intermediate data. The internal dependencies are “Fluent ORM” for converting queries written in the Swift programming language into raw SQL (Structured Query Language); “SQLite”, a relational DBMS (Database Management System) for data storage; and a REST API with commands (GET, POST) [131].
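The server side described here is written in Swift (Vapor, Fluent, SQLite); as a language-neutral illustration of its save/retrieve responsibilities, the following Python sketch models the command-history storage with the standard sqlite3 module (the table schema and function names are hypothetical, not the actual Fluent models):

```python
import sqlite3

def make_history_db():
    """Create an in-memory SQLite store for device command history,
    mirroring the server's save/retrieve responsibilities."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE commands (
                    id INTEGER PRIMARY KEY,
                    device_id TEXT NOT NULL,
                    context TEXT,
                    timestamp TEXT DEFAULT CURRENT_TIMESTAMP)""")
    return db

def post_command(db, device_id, context):
    """Analogue of the REST POST endpoint: persist one command."""
    db.execute("INSERT INTO commands (device_id, context) VALUES (?, ?)",
               (device_id, context))
    db.commit()

def get_commands(db, device_id):
    """Analogue of the REST GET endpoint: list a device's history in order."""
    rows = db.execute("SELECT context FROM commands WHERE device_id = ? "
                      "ORDER BY id", (device_id,)).fetchall()
    return [r[0] for r in rows]
```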

6.3. “Manipulator Arm” Component

General functions are communication with the control system using a Bluetooth connection; recording an image of the environment using a built-in camera and transmitting the image to the system; getting the coordinates of the object from the system; and transforming the coordinates obtained from the system into coordinates of the environment. The “Manipulator arm” component is presented as a manipulator arm simulated using the MoveIt software and the ROS (Robot Operating System) real-time operating system. It has five degrees of freedom and a gripper at the end [132,133,134,135]. MoveIt (Figure 13) is open-source manipulation software developed at Willow Garage by Ioan A. Sucan and Sachin Chitta [133,134,135]. The software offers solutions for mobile manipulation problems, such as kinematics, planning and motion control, 3D perception, and navigation. The MoveIt library is part of the ROS package and is widely used in robotic systems. MoveIt suits developers well because it can be easily customized for any job [133,134,135].
Figure 13

Motion planning with MoveIt software.

For designing a control system using the MoveIt software, it is necessary to acquire the URDF (Unified Robot Description Format) file of the device (in our case, the URDF of the manipulator’s arm). This file describes the arm’s elemental composition, link lengths, and joints. MoveIt comes with the MoveIt Setup Assistant, which simplifies configuration to a great extent [133]. Based on the batch files with the robot configurations and the MoveIt RViz (Figure 13) configuration environment, it is possible to perform operations (motion planning, manipulation, and others) on a robotic device [133,134,135]. The novelty of the obtained results is (a) the development of the original structure of the control system, with the implementation of interaction protocols of the system components and the integration of an artificial intelligence module for the recognition and classification of objects for manipulator arm missions; and (b) the modeling of the developed control system in the MoveIt environment, with adjustment of the corresponding parameters for different manipulator arm missions.

7. Object Recognition in Robot Working Space Using Convolutional Neural Network

When designing an intelligent control module, it is necessary to find sets of images to be used in the neural network’s learning process. Intelligent control is carried out by one of the methods of machine learning, namely pattern recognition and classification using artificial neural networks [1,3,24,25,26,27,42,99]. The robotic device sends a query to gather an image from the environment; the control system processes the obtained image and sends the processing results to the robotic device. The result of this processing is the fact that the object belongs to one of the classes, together with its coordinates relative to the resulting image. In our case, it is necessary to find three datasets of images, one for each class: cube, cylinder, and sphere (partial image datasets in Figure 14).
Figure 14

Fragments of the datasets for: (a) cube; (b) cylinder; (c) sphere.

Images with several classes were also found, for example, with three cubes and two cylinders. Images with one class could contain one or more objects, such as one or five spheres. The resulting total dataset contains 213 images, including 70 images of the class “Cube”, 70 images of the class “Cylinder”, 71 images of the class “Sphere”, and 2 images with several different classes [99]. Since the training of neural networks for this class of tasks is supervised, it is necessary to add annotations to the resulting total dataset, including the coordinates of each object belonging to a class. Using the resource https://cloud.annotations.ai/ (accessed on 8 February 2021), the authors created a project, uploaded the prepared sets of images, and added annotations for each class. Figure 15 shows the process of adding annotations to images.
Figure 15

The process of adding annotations to images.

Figure 15 shows one image containing 10 objects of the same class “Cube”; in this case, 10 annotations were added, one for each cube. Each annotation is a rectangle bounding an object of a certain class, with coordinates (X, Y, width, height). After annotations were added for all data, the data were exported in the Create ML format [99,128]. The paper uses a convolutional neural network with the YOLOv2 architecture. To train this type of network, a set of images and an annotation file in .json format are necessary, with detailed information about the location of each class object (coordinates in 2D space) and the class itself. The following parameters were chosen for the neural network using Create ML [128,129,130,136] (Figure 16): the algorithm is the complete network (it trains a complete object detection network based on the YOLOv2 architecture); the number of epochs is 5000; the batch size is automatic; and the grid size is 13 × 13.
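The exported annotations consumed by Create ML's object detection training are a JSON array with one entry per image, where each annotation holds a class label and a bounding box whose x and y give the box center. A sketch of the expected structure (file names and coordinate values are hypothetical):

```python
import json

# One entry per image; each annotation gives the class label and a bounding
# box; x, y are the box centre in pixels (Create ML convention).
annotations = [
    {
        "image": "cubes_01.jpg",          # hypothetical file name
        "annotations": [
            {"label": "Cube",
             "coordinates": {"x": 120, "y": 80, "width": 42, "height": 42}},
            {"label": "Cube",
             "coordinates": {"x": 260, "y": 150, "width": 40, "height": 44}},
        ],
    },
]

# Serialize to the .json file expected by Create ML (string form shown here).
doc = json.dumps(annotations, indent=2)
```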
Figure 16

The training process in Create ML.

Additional information about learning (training) outcomes is presented in Table 1.
Table 1

Training outcomes using Create ML.

Number of Epochs | Loss | Time in Seconds | Training Accuracy in % | Validation Accuracy in %
1000 | 2.123 | 1860 | 87 | 82
2000 | 1.211 | 3660 | 95 | 90
3000 | 1.044 | 5340 | 95 | 93
4000 | 0.857 | 7020 | 99 | 93
5000 | 0.752 | 8760 | 100 | 95
The neural network model achieved a training accuracy of 100% in 5000 epochs. The model was created, trained, and tested on a 2016 MacBook Air with a 1.6 GHz i5 CPU. To check the results of training the neural network, we prepared a set of images that contain objects belonging to the specified classes and were not in the training set. The testing (recognition) results (Figure 17) show that the objects (10 cubes) in the image were recognized with 99.8% accuracy (one of the cubes was recognized with 98% accuracy, the others with 100%). However, it should be noted that in some cases, when the image is distorted, the recognition accuracy is reduced. A total of 50 images with different classes and numbers of objects were tested, and the overall accuracy was 99%.
Figure 17

Testing (recognition) results of the object class “Cube”.

In the developed software implementation, the communication between the client and the server is based on HTTP (HyperText Transfer Protocol) requests (GET, POST, PUT, DELETE). The client sends a request to one of the available endpoints; the server accepts the request, processes the data, and returns a response [99]. To expand the capabilities of the intelligent control system, the authors increased the number of classes for recognition from three (“Cube”, “Sphere”, “Cylinder”) to five (“Cube”, “Sphere”, “Cylinder”, “Cone”, and “Pyramid”). The resulting total dataset now contains 225 images, including 52 images of the class “Sphere”, 45 images of the class “Cone”, 44 images of the class “Cube”, 42 images of the class “Pyramid”, and 42 images of the class “Cylinder”. The input images had a resolution of 416 by 416 pixels. Additional information about learning (training) outcomes for five classes of recognition is presented in Table 2.
Table 2

Training outcomes for recognition of five classes using Create ML.

Number of Epochs | Loss | Time in Seconds | Training Accuracy in % | Validation Accuracy in %
450 | 1.7 | 780 | 85 | 80
510 | 1.7 | 900 | 86 | 81
1000 | 1.2 | 1800 | 87 | 82
2000 | 0.95 | 3540 | 95 | 90
3000 | 0.73 | 5220 | 95 | 93
4000 | 0.68 | 6960 | 99 | 96
5000 | 0.63 | 8700 | 100 | 100
In this case, the model was created, trained, and tested on a more powerful 2017 MacBook Air with a 1.8 GHz i5 CPU. The neural network model achieved a training accuracy of 100% in 5000 epochs, with less loss and in less time than in Table 1. In addition, the recognition (classification) accuracy is very high. For example, the network recognizes objects of the same class, in different numbers, with 100% accuracy (Figure 18a,b). Recognition accuracy decreases if several objects of different classes appear in the same image (Figure 18c). For example, the “Cylinder” object (Figure 18c) is assigned to the following classes with varying accuracy: the class “Cylinder” with 91%, the class “Sphere” with 4%, the class “Cube” with 3%, and the classes “Pyramid” and “Cone” with 1% each. Similarly, the “Cone” object is assigned to the class “Cone” with 97% and the class “Pyramid” with 3%. Nevertheless, the network still displays high recognition (classification) accuracy.
Figure 18

Testing (recognition) results: (a) class “Pyramid”; (b) class “Sphere”; (c) several objects of different classes.

A similar dataset (https://data.wielgosz.info, accessed on 10 December 2021) with five geometric figures (cone, cube, cylinder, sphere, torus) was chosen to study the influence of sample size, neural network architecture, and learning parameters on recognition accuracy. The dataset has training and test samples. The size of the training sample is 40,000 images (8000 images of each figure). The size of the test sample is 10,000 images (2000 images of each object). Different transformations were applied to each sample. Both training and test images were normalized and cropped to 224 by 224 pixels. In addition, for greater accuracy, additional transformations were applied to the training sample, including random rotations and shifts of the image center. The authors chose the ResNet34 architecture and used the Torch library for Python to work with it. The neural network was trained in Google Colab on a GPU [26,27]. The ResNet34 model was downloaded for training, and the number of outputs was changed from 1000 (default) to 5 (each output corresponds to a figure class). In the third epoch, the testing accuracy decreased (overfitting took place), so the process was stopped. The best result, obtained in the second epoch, was 98%. Additional information about training and testing outcomes for five classes of recognition is presented in Table 3.
Table 3

Training and testing outcomes for recognition of 5 classes by Torch library (Python).

Number of Epochs | Training Loss | Testing Loss | Training Accuracy in % | Testing Accuracy in %
1 | 0.1187 | 0.0857 | 96.21 | 97.07
2 | 0.0599 | 0.0611 | 98.02 | 98.00
3 | 0.0515 | 0.0661 | 98.28 | 97.87
Using fine-tuning technology, the authors increased the testing accuracy to 99.2%. Additional information about training and testing outcomes for five classes of recognition using fine-tuning technology is presented in Table 4.
Table 4

Training and testing outcomes for recognition of 5 classes using fine-tuning technology by the Torch library (Python).

Number of Epochs | Training Loss | Testing Loss | Training Accuracy in % | Testing Accuracy in %
1 | 0.0469 | 0.0341 | 98.38 | 98.68
2 | 0.0092 | 0.0582 | 99.67 | 97.68
3 | 0.0060 | 0.0263 | 99.75 | 99.20
4 | 0.0058 | 0.0677 | 99.76 | 97.16
The results (Table 4) show that overtraining occurs after the third epoch, so let us focus on the third epoch: the training accuracy is 99.75% and the testing accuracy is 99.2%. The obtained numerical results compare favorably with other similar studies [68], in which a K-nearest neighbors algorithm and a support vector machine were used for recognition and classification, with accuracies of 84.2% and 81.6%, respectively. This demonstrates the advantage of neural networks in such problems. The authors also investigated the effect of dataset size on recognition accuracy. For this study, the number of images of each class in the training and test datasets was reduced to 200, leaving 1000 training images and 1000 test images. The results of using the neural network with the ResNet34 architecture and fine-tuning technology are presented in Table 5.
Table 5

Training and testing outcomes for recognition of five classes (1000 training images and 1000 test images) using fine-tuning technology by the Torch library (Python).

Number of Epochs | Training Loss | Testing Loss | Training Accuracy in % | Testing Accuracy in %
1 | 0.6128 | 0.3412 | 77.80 | 88.40
2 | 0.1623 | 0.1902 | 94.80 | 93.60
3 | 0.1330 | 0.2070 | 95.30 | 90.90
4 | 0.1011 | 0.1240 | 96.70 | 96.80
5 | 0.0302 | 0.0601 | 99.10 | 98.00
The results (Table 5) show that the testing accuracy gradually increases from 88.4% to 98%. However, the neural network trains more slowly than on the full dataset; the sample size has a significant impact on the quality of training and the number of epochs.

The Vapor web framework was chosen to implement the server part. According to the general list of server functions, communication with the client part is necessary, which can be performed using the HTTP protocol. The Vapor Toolbox is required when working with Vapor; to download and install it, run the command (brew tap vapor/tap && brew install vapor/tap/vapor) in the Terminal window [137]. The Postman program is used to test the developed application interface: Postman allows verifying requests from the client to the server and the server’s responses. To add a command to the scheme, select the POST method, specify the endpoint address and port, form the body of the query with the model’s attributes, and click send. To receive all commands from the scheme, select the GET method, specify the endpoint address and port, and click send; the successful query code and the list of received items can then be seen [131]. With the help of the abstract network layer Moya, communication with the server can be configured by specifying the server address, endpoints, HTTP method, request type, and headers [126]. The device control system (Figure 19) can be divided into the following modules: module “Dashboard”, module “Testing”, and module “Control” [99].
Figure 19

Device control system: (a) Module “Dashboard”; (b) Module “Testing”; (c) Module “Control”: submodule “Device connection”; (d) Module “Control”: submodule “Capture by device”.

The “Dashboard” module (Figure 19a) is presented as a list of possible options for using the system: testing a trained model using a camera of a mobile device (imitation of a manipulator camera), connecting to a robotic device and control, and viewing the history of actions on devices. The “Testing” module (Figure 19b) is used to test a trained convolutional network with the YOLOv2 architecture and the ReLU activation function in the convolution layers. This module simulates the transmission of a video stream from a robotic device and performs image processing using the specified network. The processing results are indicated in the block containing the number of objects recognized in the current image and belonging to the specified class. Module “Control” (Figure 19c,d) is the main unit of this intelligent control system. It contains the following services: local data storage service, network service, communication service, synchronization service, and data processing service. All these services are realized using the Swinject container. Module “Control”: submodule “Device connection” (Figure 19c) is needed to search for robotic devices that are in range and connect to them using Bluetooth technology. After a successful connection, a command is issued, which contains communication protocols with the device. Module “Control”: submodule “Capture by device” (Figure 19d) is designed to directly capture the recognized object and control it. The “History” module is presented as a collection with objects obtained using a network service, which in turn is realized using a Swinject container. The collection header contains a drop-down list with possible sorting options (by device ID, context, timestamp). 
The novelty of the obtained results consists of (a) expanding the capabilities of the intelligent control system by increasing the number of recognition classes to five (pyramid, cone, cube, cylinder, and sphere), which, in contrast to the existing model with three recognition classes (cube, cylinder, and sphere), gives a wider area of object recognition for further capture by the manipulator; and (b) further developing the neural network model with the ResNet34 architecture through the complex application of optimization techniques, including random rotations and shifts of the image center, splitting the training sample into batches of 64 images, and using fine-tuning technology, which, in contrast to the existing model, increases the training accuracy by 1.73% (from 98.02% to 99.75%) and the testing accuracy by 1.2% (from 98.0% to 99.2%).

8. Conclusions

This article presents an analytical review of machine learning methods applied in different areas of human activity, with a focus on robotic systems. Special attention is paid to increasing the efficiency of sensor and control information processing in advanced multi-component robotic complexes (MCRC) that function in non-stationary, uncertain, or unknown working environments. An MCRC consists of a moving mobile robot and an adaptive robot with a fixed base (a manipulator with an adaptive gripper), as well as the sensor and control systems. The authors’ contributions demonstrate increased control quality and extended functional properties of the MCRC’s robotic components achieved by using (a) a fuzzy logic approach for recognition of the slippage direction of a manipulated object in the robot fingers in collision with an obstacle; (b) neuro and neuro-fuzzy approaches for the design of intelligent controllers and clamping force observers of mobile robots with magnetically controlled wheels, which can move working tools or an adaptive robot with a fixed base on inclined or ceiling ferromagnetic surfaces (ship hull, etc.); and (c) a canonical decomposition approach from statistical learning theory for the prediction of robot control system states during robot missions in a non-stationary environment. To improve the reliability of control systems, it is proposed to use an operability-monitoring module that allows for the determination of possible system failures at future points in time. The algorithm used for forecasting control system parameters, unlike the known methods (Wiener–Hopf method, Kolmogorov polynomial, Kalman filter, etc.), does not impose any restrictions on the random sequence of changes in system parameters (linearity, monotonicity, ergodicity, stationarity, Markov properties, etc.), which makes it possible to achieve the maximum accuracy of predictive operability monitoring.
The results of the numerical experiment confirmed its high efficiency (the relative extrapolation error is 2–3%). In addition, the authors illustrate the efficiency of the software used for the design of robot control systems and the training of the developed convolutional neural network for recognizing objects of different classes based on video-sensor information processing. A general structural diagram of the control system is formed at the system design stage, an entity table for the relational DBMS is presented, and a robotic arm is modeled using the MoveIt software. Upon completion of the design, the software implementation of the control system was carried out: the server is implemented using the Vapor web framework; the control system uses the Swift programming language and related technologies; and the neural network is configured and created using the CreateML framework. A neural network with the ResNet34 architecture trains quickly (3 epochs to achieve 99.2% testing accuracy) in comparison with Create ML (YOLOv2 architecture). The results of the authors’ investigation of the impact of dataset size on training and testing accuracy are: (a) the training accuracy gradually increased from the 1st to the 5th epoch (77.8%, 94.8%, 95.3%, 96.7%, and 99.1%, respectively); (b) the testing accuracy increased from the 1st epoch (88.4%) to the 5th (98.0%); and (c) the neural network trained more slowly (122 s over 5 epochs) on the small dataset (1000 training images) than on the full dataset with 40,000 training images (75 s over 3 epochs). Fine-tuning technology with the ResNet34 architecture increases the training accuracy (to 99.75%) and the testing accuracy (to 99.2%) for recognition of five different classes using the Torch library (Python).

9. Patents

Patent No. 52080, Ukraine, 2010. Y. P. Kondratenko et al. Intelligent sensor system.
Patent No. 45369, Ukraine, 2009. Y. P. Kondratenko et al. Propulsion wheel of mobile robot.
Patent No. 47369, Ukraine, 2010. Y. P. Kondratenko et al. Method of magnetically operated displacement of mobile robot.
Patent No. 100341, Ukraine, 2015. V. O. Kushnir, Y. P. Kondratenko et al. Mobile robot for mechanical clearing of ship hull.
Patent No. 73855, Ukraine, 2012. I. P. Atamanyuk, Y. P. Kondratenko. Method for prediction of object technical state.
  11 in total

Review

1.  Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions.

Authors:  Yohannes Kassahun; Bingbin Yu; Abraham Temesgen Tibebu; Danail Stoyanov; Stamatia Giannarou; Jan Hendrik Metzen; Emmanuel Vander Poorten
Journal:  Int J Comput Assist Radiol Surg       Date:  2015-10-08       Impact factor: 2.924

2.  Influence of Trajectory and Dynamics of Vehicle Motion on Signal Patterns in the WIM System.

Authors:  Artur Ryguła; Andrzej Maczyński; Krzysztof Brzozowski; Marcin Grygierek; Aleksander Konior
Journal:  Sensors (Basel)       Date:  2021-11-26       Impact factor: 3.576

3.  Speed Control for Leader-Follower Robot Formation Using Fuzzy System and Supervised Machine Learning.

Authors:  Mohammad Samadi Gharajeh; Hossein B Jond
Journal:  Sensors (Basel)       Date:  2021-05-14       Impact factor: 3.576

4.  Adoption of Machine Learning Algorithm-Based Intelligent Basketball Training Robot in Athlete Injury Prevention.

Authors:  Teng Xu; Lijun Tang
Journal:  Front Neurorobot       Date:  2021-01-15       Impact factor: 2.650

5.  An Efficient and Reliable Algorithm for Wireless Sensor Network.

Authors:  Faheem Khan; Shabir Ahmad; Hüseyin Gürüler; Gurcan Cetin; Taegkeun Whangbo; Cheong-Ghil Kim
Journal:  Sensors (Basel)       Date:  2021-12-14       Impact factor: 3.576

6.  Deep Auto-Encoder and Deep Forest-Assisted Failure Prognosis for Dynamic Predictive Maintenance Scheduling.

Authors:  Hui Yu; Chuang Chen; Ningyun Lu; Cunsong Wang
Journal:  Sensors (Basel)       Date:  2021-12-15       Impact factor: 3.576

  1 in total

1.  A Model for Predicting Cervical Cancer Using Machine Learning Algorithms.

Authors:  Naif Al Mudawi; Abdulwahab Alazeb
Journal:  Sensors (Basel)       Date:  2022-05-29       Impact factor: 3.847

