
A New Approach to Unwanted-Object Detection in GNSS/LiDAR-Based Navigation.

Mathieu Joerger, Guillermo Duenas Arana, Matthew Spenko, Boris Pervan

Abstract

In this paper, we develop new methods to assess safety risks of an integrated GNSS/LiDAR navigation system for highly automated vehicle (HAV) applications. LiDAR navigation requires feature extraction (FE) and data association (DA). In prior work, we established an FE and DA risk prediction algorithm assuming that the set of extracted features matched the set of mapped landmarks. This paper addresses these limiting assumptions by incorporating a Kalman filter innovation-based test to detect unwanted objects (UO). UOs include unmapped, moving, and wrongly excluded landmarks. An integrity risk bound is derived to account for the risk of not detecting UOs. Direct simulations and preliminary testing help quantify the impact of UO monitoring on integrity and continuity in an example GNSS/LiDAR implementation.


Keywords:  GNSS; LiDAR; autonomous cars; detection; integrity monitoring; navigation; safety

Year:  2018        PMID: 30127312      PMCID: PMC6111790          DOI: 10.3390/s18082740

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


1. Introduction

This paper describes the design, analysis, and preliminary testing of a new method to quantify safety in GNSS/LiDAR navigation systems. An integrity risk bound is derived, which accounts for failures to detect undesirable, unmapped, and wrongly extracted obstacles. The paper describes an innovation-based method, which is an alternative to the solution separation approach used in [1]. In addition, the paper provides the means to quantify the impact of unwanted objects (UO) on the risk of incorrect association. This work is intended for driverless cars, or highly automated vehicles (HAV) [2,3], operating in changing environments where unknown, moving obstacles (cars, buses, and trucks) are not wanted as landmarks for localization, and may occlude other useful, mapped landmarks. This research leverages prior analytical work carried out in civilian aviation navigation, where safety is assessed in terms of integrity and continuity [4]. These performance metrics are sensor- and platform-independent. Integrity is a measure of trust in sensor information: integrity risk is the probability of undetected sensor errors causing unacceptably large positioning uncertainty [4]. Continuity is a measure of the navigation system's ability to operate without unscheduled interruption. Both loss of integrity and loss of continuity can place the HAV in hazardous situations [4,5]. Several methods have been established to predict integrity and continuity risks in GNSS-based aviation applications [6,7,8]. Unfortunately, the same methods do not directly apply to HAVs, because ground vehicles operate under sky-obstructed areas where GNSS signals can be altered or blocked by buildings and trees. HAVs therefore require sensors in addition to GNSS, including LiDARs, cameras, or radars. This paper focuses on LiDARs because of their prevalence in HAVs, their market availability, and our prior experience with them.
A raw LiDAR scan is made of thousands of data points, each of which individually does not carry any useful navigation information. Raw measurements must be pre-processed before they can be used to estimate HAV positioning and orientation (or pose). A first class of algorithms establishes correlations between successive scans to estimate sensor changes in ‘pose’ (i.e., position and orientation) [9,10,11,12]. These procedures, including the Iterative Closest Point (ICP) approach [13], can become cumbersome when evaluating safety of HAVs moving over time. A second class of algorithms provides sensor localization by tracking recognizable, static features in the perceived environment (seminal references and survey papers can be found in [14,15,16,17,18,19]). Features can include, for example, lines or planes corresponding to building walls in two- or three-dimensional scans, respectively. Previous knowledge of feature parameters can be provided either from a landmark map, or from past-time estimation in Simultaneous Localization and Mapping (SLAM) [15,20]. The resulting information can then be iteratively processed using sequential estimators in SLAM (e.g., Extended Kalman filter or EKF), which is convenient in practical implementations. To estimate the HAV’s pose starting from a raw LiDAR scan, two intermediary, pre-estimator procedures must be carried out: feature extraction (FE), and data association (DA). First, FE aims at finding the few most consistently recognizable, viewpoint-invariant, and mutually distinguishable landmarks in the raw sensor data. Second, DA aims at assigning the extracted features to the corresponding feature parameters assumed in the estimation process, i.e., at finding the ordering of mapped landmarks that matches the ordering of extracted features over successive observations. Incorrect association is a well-known problem that can lead to large navigation errors [21], thereby representing a threat to navigation integrity. 
FE and DA can be challenging in the presence of sensor uncertainty, which is why many sophisticated algorithms have been devised [17,18,19,21,22,23]. But how can we prove whether FE and DA are safe for life-critical HAV navigation applications? This research question is mostly unexplored. Several publications on multi-target tracking describe relevant approaches to evaluate the probability of correct association in the presence of measurement uncertainty [24,25]. However, these algorithms are not well suited for safety-critical HAV applications, due to their lack of prediction capability, to approximations that do not necessarily upper-bound risks, and to high computational loads. In addition, the risk of incorrect FE is not addressed. Overall, research on the integrity and continuity of FE and DA is sparse. This paper builds upon prior work in [1,26,27,28], where we developed an analytical integrity risk prediction method using a multiple-hypothesis, innovation-based DA process. We established a compact expression for the integrity risk of LiDAR-based pose estimation over successive iterations. However, references [26,27,28] made simplifying assumptions that limit the applicability of these prior results. For example, we assumed that the set of landmarks in the a priori map was exactly the same as the one being extracted. This assumption was relaxed in [1], where we developed an integrity-risk-minimizing data-selection method. To achieve this, we derived a bound on the risk of incorrect association, with which a subset of measurements can be used while considering potential wrong associations with all landmarks surrounding the LiDAR. This bound was used in a preliminary approach to detect UOs using solution separation tests. In practice, UOs such as other vehicles passing by are likely to be extracted, and may even occlude other mapped landmarks. Obstacle detection methods have been developed to mitigate the impact of such UOs (example methods are described in [29,30]).
But the safety risks of using UOs as landmarks for navigation have yet to be fully quantified. In response, in this paper, we derive new methods to quantify the integrity risk caused by failures to detect unwanted objects (UO), while guaranteeing a predefined false alert risk requirement. Section 2 of the paper provides an overview of the risk evaluation methods developed in [1,26,27,28] and of their limitations. These methods use a nearest-neighbor DA criterion [9], defined by the minimum normalized norm of the EKF innovation vectors over all possible landmark permutations. Section 3 and Section 4 deal with the situation where a mapped landmark is not extracted, but another unknown obstacle is extracted instead (e.g., the case of an obstacle masking a mapped landmark). This paper assumes that UOs only mask one unknown landmark at a time as the HAV drives by. Section 3 describes the innovation-based approach employed to detect the UO (which differs from the solution separation detector employed in [1]). An integrity risk bound is then derived to incorporate the risk of not detecting a UO when one might be present. This bound is analytically evaluated in two steps in Section 4: we account for the impact of undetected UOs (a) on the probability of hazardously misleading information (HMI) under the correct association (CA) hypothesis, and (b) on the probability of incorrect association (IA). Navigation integrity performance is then assessed in Section 5, using direct simulations and preliminary testing for an example implementation using GNSS and two-dimensional LiDAR data.

2. Background: Integrity Risk Bound Accounting for Incorrect Associations

This section presents an overview of the integrity risk evaluation method described in [1,26,28], which uses a multiple-hypothesis innovation-based DA process.

2.1. Integrity Risk Definition and Integrity Risk Bound

The integrity risk, or probability of hazardously misleading information (HMI), at time k is denoted P(HMI_k) and is defined in Figure 1. The safety criterion is P(HMI_k) <= I_REQ, where I_REQ is a predefined integrity risk requirement set by a certification authority (similar to requirements set for aviation applications in [4,8]). Values of I_REQ that might be used in future HAV applications can be found in [5].
Figure 1

Defining Integrity Risk for Automotive Applications. The integrity risk is the probability of the car being outside the alert-limit requirement box (blue shaded area) when it was estimated to be inside the box. When lateral deviation is of primary concern, the alert limit is the distance between the edge of the car and the edge of the lane.

In [26,28], we established an analytical bound on the integrity risk, which accounts for the risk of incorrect associations. This bound takes the form:

P(HMI_k) <= P(HI_k | CA_k) + P(IA_k)  (1)

where P(HI_k | CA_k) is the probability of hazardous information under the correct association hypothesis, and P(IA_k) is the probability of incorrect association, bounded using the terms defined in Equations (2) and (3). The integrity risk bound in Equation (1) is refined in this paper to account for the presence of UOs and for failures to detect them. Equation (1) captures a key tradeoff in data association: on the one hand, using only a few measurements can cause a large nominal estimation error and hence a large P(HI_k | CA_k); but on the other hand, a few measurements from sparsely distributed landmarks can improve P(IA_k), because features are "separated", distinguishable, and therefore can be robustly associated. P(HMI_k) itself is unknown, but we can assess safety by comparing I_REQ to the upper bound given in Equations (1)-(3), where all terms are known.
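To make the two-term structure of this bound concrete, the following minimal sketch evaluates a Gaussian-overbounded nominal tail term and adds a misassociation term. The function names, the scalar cross-track error model, and the parameter values are illustrative assumptions, not the paper's exact equations.

```python
from math import erfc, sqrt

def p_hazard_given_ca(sigma: float, ell: float) -> float:
    """Two-sided Gaussian tail P(|error| > ell) for a zero-mean,
    Gaussian-overbounded estimation error under correct association."""
    return erfc(ell / (sigma * sqrt(2.0)))

def p_hmi_bound(sigma: float, ell: float, p_ia: float) -> float:
    """Bound of the generic form P(HMI) <= P(HI | CA) + P(IA)."""
    return min(1.0, p_hazard_given_ca(sigma, ell) + p_ia)

# Tradeoff: more (densely spaced) landmarks shrink sigma (first term)
# but can inflate the misassociation bound p_ia (second term).
few_sparse = p_hmi_bound(sigma=0.25, ell=0.5, p_ia=1e-9)
many_dense = p_hmi_bound(sigma=0.05, ell=0.5, p_ia=1e-4)
```

In this toy setting the dense-landmark configuration is dominated by the IA term and the sparse one by the nominal tail, illustrating why the data-selection method of [1] minimizes the bound over landmark subsets.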

2.2. Innovation-Based Data Association

Equation (1) is derived for an innovation-based DA process, which is further described in the following paragraphs. Let n_L be the total number of visible landmarks and n_F the number of estimated feature parameters per landmark. Feature parameters can include landmark position, size, orientation, surface properties, etc. When using LiDAR only (we integrate GNSS in Section 5), the total number of feature parameters within the visible landmark set is n = n_L n_F. We can stack the actual (true) values of the extracted feature parameters for all landmarks in an n x 1 vector z, and let ẑ be an estimate of z. We assume that the cumulative distribution function of ẑ can be bounded by a Gaussian function with mean z and covariance matrix V [31,32,33], and we use the notation ẑ ~ N(z, V). The nonlinear measurement equation can be written in terms of the state parameter vector x as in Equation (4), and can be linearized about an estimate of x to obtain Equation (5). The ordering of landmarks in z is arbitrary and unknown. A nearest-neighbor approach (described below) is used to determine the ordering of measurement-to-state coefficients in Equations (4) and (5). Failing to find the landmark ordering that matches that of z causes estimation errors called incorrect associations (IA). If n_L landmarks are extracted, there are n_L! ways to arrange measurements in z, which we call candidate associations. For clarity of exposition, we assume that the total number of mapped landmarks, or of previously observed landmarks when using SLAM, is also the number of extracted landmarks (procedures to address this assumption are given in [1]). Let subscript i designate association hypotheses, for i = 0, ..., h, where h = n_L! - 1. We define i = 0 as the fault-free, correct association (CA) hypothesis; all other hypotheses are IA. IA impacts the EKF estimation process through the innovation vector. The innovation vector is an effective indicator of CA because it is zero mean only for the correct association.
In all IA cases, the mean of the innovation vector is not zero and can be expressed in terms of permutation matrices, as in Equations (6) and (7), in which the remaining terms are the EKF state prediction error vector, the identity matrix, and the EKF state prediction error covariance matrix. We select the association candidate i* that satisfies the nearest-neighbor association criterion [9], defined in Equations (8) and (9) as the minimizer of the normalized norm of the innovation vector over all candidate associations. The probability of correct association is the probability of the event i* = 0. We can determine the a priori distributions of the normalized innovation norms, for i = 0, ..., h, except for their mean values, which are unknown. In [28], we show that the corresponding term used in Equation (1) is a lower bound on the mean innovation's norm. Equation (1) is a bound on P(HMI_k), but it assumes that no UO is present. We first design a UO detector and derive a new bound in Section 3, and then we establish an analytical method to evaluate the impact of undetected UOs on this new bound in Section 4.
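A minimal sketch of this nearest-neighbor search over candidate associations, assuming scalar feature measurements and a diagonal innovation covariance; the function and variable names are illustrative, and a real implementation would whiten the stacked innovation vector with the full EKF innovation covariance.

```python
import itertools

def nn_association(z, z_pred, s_inv_sqrt):
    """Nearest-neighbor data association: score every landmark
    permutation by its normalized innovation norm squared and keep
    the minimizer. z holds extracted (unordered) scalar measurements,
    z_pred the predicted measurements in map order, and s_inv_sqrt
    the (assumed diagonal) inverse square root of the innovation
    covariance."""
    n = len(z)
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):
        innov = [z[perm[i]] - z_pred[i] for i in range(n)]
        cost = sum((s_inv_sqrt[i] * innov[i]) ** 2 for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost
```

The factorial number of candidates makes this exhaustive search tractable only for the few-landmark case considered here; this is one reason the data-selection step of [1] restricts the landmark subset.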

3. Risks Involved with Unwanted Object Detection

In the presence of a UO, the innovation vector’s norm in Equation (9) is nonzero under all association hypotheses. In this case, the correct association hypothesis must be redefined. We call correct association (CA) the one where all landmarks that are not occluded by a UO are correctly associated, i.e., where the innovation vector would be zero mean if the UO was removed. The nonzero mean in the CA’s innovation vector is caused by the UO only, not by other incorrectly associated landmarks.

3.1. Innovation-Based Detector

If a UO is present, the innovation vector does not have a mean of zero even under CA. To identify such events, we can set a threshold on the minimum innovation norm squared or, since the process is performed over time, on the running sum of minimum innovation norms squared. Using innovations (instead of solution separations as in [1]) will facilitate the evaluation of the risk of incorrect association in Section 4. The UO detection test statistic q_k is defined in Equation (10) as this running sum. Since the innovation sequence is white, q_k is non-centrally chi-squared distributed with n_q degrees of freedom and noncentrality parameter (NCP) λ_k, which is further discussed in Section 4. The detection threshold T_k is set in accordance with a continuity risk requirement C_REQ,k to limit the risk of false alerts. False alerts occur when no UO is present, in which case q_k's NCP is zero under CA. Thus, T_k is given by Equation (11) as the inverse cumulative distribution function (CDF) of the central chi-squared distribution evaluated at the 1 - C_REQ,k quantile. If T_k is exceeded, we interrupt the mission. (As an alternative to mission interruption, we could select a different set of landmark feature measurements as in [1,34], but this is beyond the scope of this paper.) This interruption does not impact integrity. However, if T_k is not exceeded, a UO may still be present, because the detection test statistic is a random, noisy variable. Navigation errors due to undetected UOs can cause the vehicle to crash.
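The threshold-setting step described above can be sketched as follows. For an even number of degrees of freedom, the central chi-squared survival function has a closed Poisson-sum form, and the threshold is the value whose false-alert probability equals the continuity allocation (C_REQ,k = 10^-3 in Table 1); the bisection below stands in for a library inverse-CDF call, and the function names are illustrative.

```python
from math import exp

def chi2_sf_even(x: float, dof: int) -> float:
    """Survival function of a central chi-squared variable with an
    even number of degrees of freedom (closed-form Poisson sum)."""
    m = dof // 2
    term, total = 1.0, 1.0
    for j in range(1, m):
        term *= (x / 2.0) / j
        total += term
    return exp(-x / 2.0) * total

def detection_threshold(dof: int, continuity_req: float) -> float:
    """Smallest T with P(q > T | no UO) <= continuity_req, found by
    bisection on the monotonically decreasing survival function."""
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if chi2_sf_even(mid, dof) > continuity_req:
            lo = mid
        else:
            hi = mid
    return hi
```

For two degrees of freedom and C_REQ,k = 10^-3, the threshold reduces to -2 ln(10^-3), which is a convenient sanity check.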

3.2. Integrity Risk in Presence of UO

To quantify the integrity risk caused by potentially undetected UOs, the definition in Equation (1) is modified: HMI is the joint event of the car being out of its lane while no alert has been sent. The integrity risk is redefined in Equation (12) as P(HMI_k) = P(|ε_k| > ℓ ∩ q_k <= T_k), where ε_k is the EKF state estimation error for the state of interest, e.g., the vehicle's lateral deviation within its lane, and ℓ is the alert limit. Because ε_k and q_k are obtained after associating LiDAR data to a landmark map, we consider a set of mutually exclusive, exhaustive hypotheses of correct association (CA) and incorrect associations (IA). We derived the bound given in Equation (13), which separates the contribution of the CA hypothesis from that of the IA hypotheses. In Section 4, we derive upper bounds on both terms.

4. Analytical Bounds on Risks Caused by Undetected Unwanted Objects

As stated in Section 1, this paper assumes that UOs only mask one unknown landmark at a time as the HAV drives by. This can be extended to multiple UOs masking one subset of landmarks at a time, using the procedures described in [1]. However, the performance analysis in Section 5 does not illustrate this case. The limitation is that the UO-free subset must be large enough to enable HAV pose estimation; the method requires landmark redundancy because it assumes an uncertain vehicle dynamic model and no inertial navigation system.

4.1. Risk of HMI Due to Undetected UO

We consider a set of mutually exclusive, exhaustive hypotheses H_i of a UO masking a landmark (or landmark subset), for i = 1, ..., n_H, where n_H is the total number of hypotheses. We denote by H_0 the fault-free (no-UO) hypothesis. Using the law of total probability, the first term of Equation (13) is rewritten as Equation (14). We have no prior knowledge of the probability of occurrence of each H_i, but we can bound the sum of their occurrence probabilities by 1. Thus, this term can be upper-bounded using the expression in Equation (15). Recalling that ε_k and q_k are statistically independent (e.g., [35,36]), we can rewrite the bound in Equation (15) as Equation (16). Under the correct association hypothesis, the distributions of ε_k and q_k are known except for the mean of ε_k and the NCP of q_k. Thus, Equation (16) can be upper-bounded using receiver autonomous integrity monitoring (RAIM) methods [6,7,34,35,36,37]. A UO causes a shift in the mean of ε_k and in the NCP of q_k. Large UO-induced feature measurement errors cause a large mean estimation error (i.e., a high risk of HI) but also a large NCP, which makes the UO easier to detect (i.e., a low risk of no detection). To analyze this tradeoff, innovation-based chi-squared RAIM methods consider the failure mode slope (FMS) [34,35,36,37]. Given a UO hypothesis H_i, for i = 1, ..., n_H, the FMS is the ratio of the mean estimation error over the NCP of the test statistic q_k. Recent analytical results in [35], established in the context of GNSS/INS integration, provide the means to recursively determine the FMS when using an EKF for estimation and a sequence of innovations for detection. We use this method to evaluate the bound in Equation (16) for the risk-maximizing hypothesis, i.e., for the worst-case FMS, as shown in Equation (17), where the fault magnitude is a search parameter (so named in [36]) that is easily determined at each time step using a one-dimensional search, e.g., an interval-halving method [36], and where the remaining terms are defined in Equation (18).
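The worst-case fault-magnitude search described above can be sketched in a simplified scalar setting: a fault of magnitude f is assumed to shift the estimate mean by slope * f while giving the detector an NCP of f**2, and the bound keeps the worst product of the hazard probability and the missed-detection probability. A grid sweep stands in for the interval-halving search; all function names, the scalar error model, and the stdlib chi-squared helpers (valid for even degrees of freedom, moderate NCPs) are illustrative assumptions.

```python
from math import erfc, exp, sqrt

def chi2_sf_even(x, dof):
    """Central chi-squared survival function, even dof (Poisson sum)."""
    m = dof // 2
    term, total = 1.0, 1.0
    for j in range(1, m):
        term *= (x / 2.0) / j
        total += term
    return exp(-x / 2.0) * total

def ncx2_cdf_even(x, dof, lam, terms=150):
    """Noncentral chi-squared CDF as a Poisson-weighted mixture of
    central chi-squared CDFs (truncated series, moderate lam only)."""
    w, total = exp(-lam / 2.0), 0.0
    for j in range(terms):
        total += w * (1.0 - chi2_sf_even(x, dof + 2 * j))
        w *= (lam / 2.0) / (j + 1)
    return total

def worst_case_p_hmi(slope, sigma, ell, threshold, dof,
                     f_max=10.0, steps=60):
    """Sweep the fault magnitude f and keep the worst-case product
    P(|error| > ell | mean = slope*f) * P(q < threshold | NCP = f**2)."""
    worst = 0.0
    for i in range(steps + 1):
        f = f_max * i / steps
        mu = slope * f
        p_hi = 0.5 * erfc((ell - mu) / (sigma * sqrt(2.0))) \
             + 0.5 * erfc((ell + mu) / (sigma * sqrt(2.0)))
        p_nd = ncx2_cdf_even(threshold, dof, f * f)
        worst = max(worst, p_hi * p_nd)
    return worst
```

The sweep exposes the tradeoff in the text: small faults are hard to detect but nearly harmless, large faults are hazardous but almost surely detected, and the bound is set by the fault magnitude balancing the two.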

4.2. Risk of Incorrect Association Due to Undetected UO

This subsection aims at evaluating the other unknown term in Equation (13): the probability of IA occurring jointly with no detection. The presence of a UO can cause the risk of IA to grow without bound. In this case again, the detector is leveraged to limit the impact of UOs on safety risks. However, in contrast with Section 4.1, two major challenges must be tackled to upper-bound this term: (i) the events of IA and of no detection are correlated, because both depend on the same innovation vectors; and (ii) unlike on the left-hand side of Equation (17), there is no condition on association (no "given CA"), so we do not know which association is used to compute the innovations in the detection test statistic. In response, we used an approach based on the minimum detectable error (MDE) concept used in the GPS Local Area Augmentation System (LAAS) [4,38,39]. The MDE is a probabilistic bound on the NCP of the chi-squared detection test statistic. Appendix A shows the bound in Equation (19), where λ_MDE,k is the MDE due to a UO at time k. λ_MDE,k can be computed using Equation (20). The probability I_MDE,k is an integrity risk requirement allocation, i.e., a fraction of the overall requirement I_REQ. λ_MDE,k is the smallest value that the detection test statistic's NCP can take to ensure that the risk of no detection stays below I_MDE,k. λ_MDE,k is a probabilistic bound, not a random variable (which addresses challenge (i) above), and it is independent of the association candidate (Equation (20) only depends on the number of degrees of freedom, thus addressing challenge (ii)).
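The MDE computation in Equation (20) can be sketched as a one-dimensional root search: find the smallest detector NCP whose missed-detection probability falls to the integrity allocation I_MDE,k (10^-10 in Table 1; 10^-5 is used below only to keep the truncated series short). The stdlib chi-squared helpers, valid for even degrees of freedom, stand in for library CDF calls, and all names are illustrative.

```python
from math import exp

def chi2_sf_even(x, dof):
    """Central chi-squared survival function, even dof (Poisson sum)."""
    m = dof // 2
    term, total = 1.0, 1.0
    for j in range(1, m):
        term *= (x / 2.0) / j
        total += term
    return exp(-x / 2.0) * total

def ncx2_cdf_even(x, dof, lam, terms=300):
    """Noncentral chi-squared CDF via a truncated Poisson mixture."""
    w, total = exp(-lam / 2.0), 0.0
    for j in range(terms):
        total += w * (1.0 - chi2_sf_even(x, dof + 2 * j))
        w *= (lam / 2.0) / (j + 1)
    return total

def minimum_detectable_ncp(threshold, dof, i_mde, lam_max=500.0):
    """Smallest NCP lam with P(no detection) = P(q < threshold | lam)
    at or below the allocation i_mde, found by bisection (the
    missed-detection probability decreases monotonically in lam)."""
    lo, hi = 0.0, lam_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ncx2_cdf_even(threshold, dof, mid) > i_mde:
            lo = mid
        else:
            hi = mid
    return hi
```

Because the search depends only on the threshold, the degrees of freedom, and the allocation, the resulting NCP bound is indeed independent of the association candidate, mirroring the argument in the text.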

4.3. Summary of the New Integrity Risk Bound, Accounting for Presence of UO

In the presence of UOs due to wrong landmark feature extraction, the probability of hazardously misleading information (HMI) at time k can be bounded by the expression in Equation (21), which combines, through the terms defined in Equations (22) and (23), the bound on the risk of HMI under correct association derived in Section 4.1 with the bound on the risk of incorrect association derived in Section 4.2.

5. Performance Analysis

In this section, example simulations and testing introduced in [26,27,28,40,41] are employed to compare the bounds assuming no UOs in Equations (1)–(3) versus accounting for possible UOs in Equations (21)–(23).

5.1. Direct Simulation: Vehicle Roving through a GNSS-Denied Area

This analysis investigated the safety performance of a GPS/LiDAR navigation system onboard a vehicle roving through a forest-type environment. GPS signals were blocked by the tree canopy, and low-elevation satellite signals did not penetrate under the trees. Tree trunks, modeled as poles with nonzero radii, served as landmarks for a two-dimensional LiDAR using a SLAM-type algorithm. The measurement vector in Equation (4) was augmented with GPS code and carrier measurements, and the state vector was augmented to include an unknown GPS receiver clock bias and carrier phase cycle ambiguities. Time-correlated GPS signals and nonlinear LiDAR data were processed in a unified time-differencing EKF derived in [33,34]. The main simulation parameter values are listed in Table 1, and a differential GPS measurement error model was used, which is fully described in [41]. In this scenario, GPS and LiDAR essentially relayed each other, providing seamless transitions from open sky through GPS-denied areas.
Table 1

Simulation parameters.

System Parameter: Value
Standard deviation of raw LiDAR ranging measurement: 0.02 m
Standard deviation of raw LiDAR angular measurement: 0.5 deg
LiDAR range limit: 20 m
GNSS and LiDAR data sampling interval: 0.5 s
Standard deviation of raw GNSS code ranging signal: 1 m
Standard deviation of raw GNSS carrier ranging signal: 0.015 m
GNSS multipath correlation time constant: 90 s
Vehicle speed: 1 m/s
Alert limit: 0.5 m
Integrity risk allocation for FE, I_FE,k: 10^-9
Integrity risk allocation for MDE, I_MDE,k: 10^-10
Continuity risk requirement, C_REQ,k: 10^-3
As shown in Figures 2, 3, 4 and 6, we consistently employed the following yellow-green-blue color code: the mission started with the vehicle operating in a GPS-available area (yellow-shaded). Satellite signals available during initialization enabled accurate estimation of cycle ambiguities, so that vehicle positioning uncertainty did not exceed a few centimeters. Then, as the vehicle moved and crossed the GPS- and LiDAR-available area (green-shaded) and the LiDAR-only area (blue-shaded), seamless variations in covariance were achieved. A detailed description of this simulation is given in [41]. In this scenario, the likelihood of IA is high.
Figure 2

Simulation results assuming no unwanted objects (UO). (top left) On the upper plot, the thick black line represents the actual cross-track positioning error and the thin line is the one-sigma covariance envelope. The lower plot shows P(HI) bounds for the GPS-denied area crossing scenario. (top right) Snapshot vehicle-landmark geometry at the time step corresponding to the large increase in P(HI) Bound (time = 29 s). (bottom left) Azimuth elevation sky plot showing GPS satellite geometry at time = 29 s. (bottom right) Snapshot LiDAR scan at time = 29 s when landmark “1” is hidden behind landmark “4”.

Figure 3

P(HMI) bounds taking into account the possibility of IA and the potential presence of UOs. The difference between the dashed black line and the solid black line quantifies the impact on P(HMI) of undetected UOs when assuming correct association (CA). The difference between the dashed red line and the solid red line measures the impact on P(HMI) of undetected UOs when accounting for incorrect associations.

Figure 4

Simulation results accounting for UOs. (a) P(HMI)-bound contributions under each UO hypothesis (H0 assumes no UO, H1 assumes a UO masks landmark “1”, etc.): the overall risk is the thick green line. (b) Color-coded landmark geometry: the color code identifies which landmark is masked by a UO under the corresponding hypothesis in the left-hand-side plot.

First, as shown in Figure 2, we assumed that no UO was present but that IAs occurred. One indicator of IA is displayed at the top of the upper left-hand-side (LHS) plot in Figure 2: the actual cross-track positioning error (thick black line) versus distance travelled exceeded the corresponding one-sigma covariance envelope (thin black line). This suggests that errors impacting positioning are not captured by the covariance. This is confirmed in the lower part of the upper LHS chart in Figure 2, where the black curve, showing a bound derived directly from the EKF covariance, stayed below the integrity risk requirement; this curve does not account for IA. In contrast, the red P(HI)-bound curve reached a first plateau of I_FE,k = 10^-9 as soon as two landmarks were visible, by design of our risk evaluation method [28]. The curve then suddenly increased to 10^-5 at approximately 29 m of travel distance. To explain this sudden jump, the top right-hand-side (RHS) chart in Figure 2 shows that, at the travel distance of 29 m (i.e., at travel time 29 s) corresponding to the large increase in predicted integrity risk, landmark "1" was hidden behind landmark "4". Landmark "1" became visible to the LiDAR again at the next time step, which made correct measurement association with either landmark "1" or "4" extremely challenging. The P(HI) bound accounted for the risk caused by such events. This is consistent with other results presented in [1,26,27,28]. The bottom LHS chart in Figure 2 shows the simulated GPS satellite geometry on an azimuth-elevation sky plot. At travel time 29 s, the tree canopy blocked all satellite signals. The bottom RHS chart displays the simulated LiDAR measurements, showing again that landmark "1" was not visible from the LiDAR's viewpoint. In Figure 3, the risk of having a UO occluding a landmark is taken into account, and our new integrity risk evaluation method was implemented.
We could quantify the impact on P(HMI) of undetected UOs assuming systematic CA by measuring the difference between the dashed black line, derived using [28], and the solid black line. We noticed again that the bound directly derived from the EKF covariance was a poor safety metric, because it stayed below the integrity risk requirement, whereas the bound accounting for UOs exceeded it. In parallel, the red curves account for the risk of incorrect association (IA): the difference between the dashed red line and the solid red line, both of which rose above the requirement, shows the impact on P(HMI) of undetected UOs. To better understand the shape of the overall bound, Figure 4 shows the contributions of each single-UO hypothesis (assuming no UO, assuming a UO masking landmark "1", assuming a UO masking landmark "2", etc.). In Figure 4, the color code used in the LHS graph is also employed in the RHS plot to represent the landmark involved in the corresponding fault hypothesis. Peaks in P(HMI)-bound contributions occurred when the landmark geometry and redundancy were too poor to ensure reliable detection of a given UO. The overall bound, the maximum of all the contributions at each time step, is represented with a thick green line.

5.2. Preliminary Testing in an Incorrect-Association-Free Environment

Preliminary experimental testing was carried out using data collected in the structured environment shown in Figure 5. Static, simple-shaped landmarks were placed at locations sparse enough to ensure successful FE and DA outcomes. Because the results presented here were free of incorrect associations, the P(HMI) bound accounting for IA was expected to match the bound assuming CA. This test data was used to focus on the risk of UO misdetection.
Figure 5

Experimental setup of a forest-type scenario, where a GPS/LiDAR-equipped rover is driving by six landmarks (cardboard columns) in a GPS-denied area. GPS is artificially blocked by a simulated tree canopy and a precise differential GPS solution is used for truth trajectory determination.

Measurements from carrier-phase differential GPS (CPDGPS) as well as LiDAR scanners were synchronized and recorded. In order to obtain a full 360-degree LiDAR scan, two 180-degree LiDAR scanners were assembled back-to-back. The LiDAR scanners had a specified 15–80-m range limit, a 0.5-degree angular resolution, a 5-Hz update rate, and a ranging accuracy of 1–5 cm (1 sigma) [42]. The GPS antenna was mounted on top of the front LiDAR, and the lever-arm distance between the two LiDARs was accounted for. The two LiDARs and the GPS antenna were mounted on a rover also carrying the GPS receiver and data link. An embedded computer onboard the vehicle recorded all measurements, including the raw GPS data from the reference station transmitted via a wireless spread-spectrum data link. The truth trajectory was obtained using a fixed CPDGPS solution. The upper LHS chart in Figure 6 confirms that this is an incorrect-association-free scenario, because the actual error (thick line) fits within the covariance envelope (thin line) throughout the test. In addition, the lower LHS graph in Figure 6 shows the P(HMI)-bound contributions for each single-UO hypothesis. The six bounds corresponding to UO hypotheses are shown using the same color code as in Figure 4, and the UO-free hypothesis is the dashed line. The color code is also used on the RHS chart, which shows the landmark geometry. In the LHS graph, the P(HMI) bound increases substantially when accounting for undetected UOs (thick black curve), as compared to ignoring their potential presence (dashed red line). UOs occluding landmarks "1" and "2" cause by far the largest increase in the P(HMI) bound. In this SLAM-type implementation, where the map is built incrementally, landmarks observed early in the rover trajectory play a key role throughout the mission, which explains the method's sensitivity to potential extraction faults on landmarks "1" and "2".
In future work, we will try to reduce the bound using redundant information from other sensors, from additional landmarks, and from additional landmark features.
Figure 6

Experimental results accounting for UOs. (a) P(HMI)-bound contributions for each unwanted object (UO) hypothesis for the preliminary experimental dataset: the overall risk is the thick black line. (b) Color-coded subsets identifying which landmark is occluded by a UO under each one of the six single-UO hypotheses.

6. Conclusions

This paper presents a new approach to improve the safety of LiDAR-based navigation by quantifying the risk of missed detection of unwanted objects (UO). UOs can occlude useful landmarks, thereby causing large navigation errors. We established a bound on the integrity risk caused by UOs. First, we presented an innovation-based detector, and we established an analytical expression for the impact of undetected UOs on the positioning error assuming correct association. Then, we derived a bound on the risk of incorrect association (IA) in the presence of UOs. Direct simulation and preliminary testing in a structured environment demonstrated the proposed method's ability to quantify safety risks in the presence of both UOs and IAs. The analysis showed, for example, that the Kalman filter covariance is a poor metric of safety performance. Our preliminary experimental results also suggest that additional redundant information from other sensors would be needed to safely detect UOs in the LiDAR's surroundings.