
Angle Measurement of Objects outside the Linear Field of View of a Strapdown Semi-Active Laser Seeker.

Yongbin Zheng, Huimin Chen, Zongtan Zhou.

Abstract

The accurate angle measurement of objects outside the linear field of view (FOV) is a challenging task for a strapdown semi-active laser seeker and is not yet well resolved. Considering the fact that the strapdown semi-active laser seeker is equipped with GPS and an inertial navigation system (INS) on a missile, in this work, we present an angle measurement method based on the fusion of the seeker’s data and GPS and INS data for a strapdown semi-active laser seeker. When an object is in the nonlinear FOV or outside the FOV, by solving the problems of space consistency and time consistency, the pitch angle and yaw angle of the object can be calculated via the fusion of the last valid angles measured by the seeker and the corresponding GPS and INS data. The numerical simulation results demonstrate the correctness and effectiveness of the proposed method.


Keywords:  GPS and INS; angle measurement for all-strapdown semi-active laser seeker; data fusion; four-quadrant photoelectric detector; space consistency and time consistency

Year:  2018        PMID: 29882852      PMCID: PMC6021938          DOI: 10.3390/s18061673

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


1. Introduction

Semi-active laser guidance, which has high precision and is easy to implement, is widely used in precision-guided weapons and equipment [1,2,3,4]. The core device is the semi-active laser seeker [5,6]. It receives the laser spot reflected by an object and detects the precise coordinates of the laser spot center by using a four-quadrant detector. It then calculates the pitch angle and the yaw angle between the object and the seeker’s optical axis. However, a semi-active laser seeker cannot measure the angles when the object is out of the linear field of view (FOV) [7]. This problem is even worse for a strapdown semi-active laser seeker [8]. It is still one of the bottlenecks that restricts the overall application of strapdown semi-active laser seekers.

1.1. Detection Principle of the Strapdown Semi-Active Laser Seeker

The strapdown semi-active laser seeker is composed of a four-quadrant detector, an optical system, a circuit system and a shell. The four-quadrant detector consists of four photosensitive sectors with the same area and the same photoelectric response [9], represented by I, II, III and IV, respectively (as shown in Figure 1). Let R be the radius of the photosensitive surface, r be the radius of the reflected laser spot, (y, z) be the coordinates of the laser spot center and I_i be the output current of the i-th photosensitive sector, which is proportional to the energy of the received laser spot. There are three cases for detecting the spot center with a four-quadrant detector. In the first case, the laser spot is located in the four-quadrant detector and covers each of the four photosensitive sectors (as shown in Figure 1). Then, the spot center can be calculated from the quadrant currents via Equation (1) [10,11]:
Figure 1

Detection principle of a four-quadrant detector. (a) The spot lies in the center; (b) The spot lies within the linear area.

Obviously, the laser spot center coincides with the detector origin in the case shown in Figure 1a. The origin of the seeker’s coordinate system is the origin of the four-quadrant detector, the seeker’s X axis is along the optical axis of the optical system, and the seeker’s Y and Z axes are along the Y and X axes of the four-quadrant detector, respectively. Based on the spot center coordinates, the pitch angle and the yaw angle of the object relative to the seeker’s coordinate system can be calculated using Equation (2), where f is the focal length of the seeker’s optical system. The area of the detector in which this case holds is called the linear area of the detector, and the corresponding FOV of the seeker is called the linear FOV. The second case is when the laser spot is within the photosensitive surface of the four-quadrant detector but does not cover all four photosensitive sectors, as shown in Figure 2a. In this case, the spot center cannot be precisely determined using Equation (1); it can only be known in which quadrant the laser spot center is located. This area of the four-quadrant detector is called the nonlinear area, and the corresponding FOV of the seeker is called the nonlinear FOV. The third case is when the laser spot is outside the FOV of the semi-active laser seeker, as shown in Figure 2b. In this case, the four-quadrant detector cannot detect any information regarding the laser spot. Therefore, we must ensure that the object is within the linear FOV of the semi-active laser seeker.
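The detection principle above can be sketched in Python as follows; the quadrant sign convention and the scale factor k are assumptions standing in for the paper’s Equations (1) and (2), not the exact formulas:

```python
import numpy as np

def spot_center(I1, I2, I3, I4, k=1.0):
    """Estimate the laser-spot center (y, z) on the detector from the
    four quadrant currents (valid only in the linear area).  The scale
    factor k and the quadrant layout are hypothetical assumptions."""
    total = I1 + I2 + I3 + I4
    y = k * ((I1 + I2) - (I3 + I4)) / total  # vertical offset
    z = k * ((I1 + I4) - (I2 + I3)) / total  # horizontal offset
    return y, z

def seeker_angles(y, z, f):
    """Pitch and yaw of the object relative to the optical axis, given
    the spot center (y, z) and the optical system's focal length f."""
    pitch = np.arctan2(y, f)
    yaw = np.arctan2(z, f)
    return pitch, yaw

# A spot centered on the detector (equal currents) gives zero angles:
y, z = spot_center(1.0, 1.0, 1.0, 1.0)
pitch, yaw = seeker_angles(y, z, f=0.05)
```

Equal quadrant currents correspond to the situation of Figure 1a, where the spot lies at the detector center and both angles vanish.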
Figure 2

Examples of laser spot distribution on the four-quadrant detector. (a) The spot lies in the nonlinear area; (b) The spot is located outside the FOV.

1.2. Related Work

To ensure that the reflected laser spot lies in the linear area of the four-quadrant detector, the traditional semi-active laser seeker adopts a platform structure; that is, the four-quadrant detector is installed on a complicated and high-precision servo control system [12]. The servo control system, which is composed of inertial measurement components and a dynamic follow-up system, can isolate the attitude movements of the seeker and ensure that the object is always located in the linear FOV of the seeker [4,13]. Although the platform-type semi-active laser seeker is a mature product, it has many disadvantages, such as a complex structure, high cost and large volume. In recent years, the strapdown semi-active laser seeker has become one of the main development directions of the semi-active seeker [14,15]. It removes the high-precision servo control system and places the four-quadrant detector directly onto the longitudinal axis of the seeker. The advantages are its simpler structure, higher reliability, smaller size, lighter weight and lower cost [16]. The main disadvantage is that, because the detector moves with the seeker, the problem of objects going outside the linear FOV is exacerbated. The current approach is to increase the linear FOV of the seeker via a special optics design [17,18,19,20]. However, there are three shortcomings to this approach. First, the amount by which the linear area of the detector can be expanded and the seeker FOV can be increased via optics design is very limited. Second, when the linear FOV increases, the angle measurement accuracy of the seeker will decrease, which will affect the guidance accuracy of the seeker [8]. Third, increasing the FOV of the seeker results in a shortening of the detection range, and the detection range is very important for the terminal guidance of a missile.
Therefore, the problem of angle measurement for a strapdown laser seeker when objects are outside the linear FOV still hinders the full application of the strapdown semi-active laser seeker. Considering the fact that the strapdown semi-active laser seeker is equipped with GPS and inertial attitude measurement equipment on the missile [21,22], in this work, we make full use of GPS and INS data and propose an angle measurement method for the strapdown semi-active laser seeker through data fusion.

2. Proposed Method

When an object is outside the linear FOV, by solving the space consistency problem and the time consistency problem, the pitch angle and the yaw angle can be calculated by fusing the following data: the current GPS and inertial attitude data, the angles measured by the seeker at the last moment when the object is in the linear FOV and the corresponding GPS and inertial attitude data at that moment. The following gives the specific details of this method.

2.1. Definition of Variables

Assume t₀ to be the last moment at which an object is located within the linear FOV of a strapdown semi-active laser seeker, and record the pitch angle and the yaw angle of the object measured by the seeker at t₀. Suppose that at time t, the object is outside the linear FOV of the seeker. Then, the pitch angle and the yaw angle cannot be measured by the seeker and will instead be calculated using the proposed method. Our method needs the following data: the object position (longitude, latitude and height), which is given in advance; the position of the seeker at t₀ (longitude, latitude and height); the attitude data of the seeker in the yaw-pitch-roll rotation order at time t₀ (yaw angle, pitch angle and roll angle) or the corresponding quaternions at t₀; the position of the seeker at t (longitude, latitude and height); and the attitude data of the seeker in the yaw-pitch-roll rotation order at time t (yaw angle, pitch angle and roll angle) or the corresponding quaternions at t. The above position and attitude data can be obtained via GPS and INS [23,24]. The problem of time consistency can be solved by precisely aligning the above data with the corresponding times.
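Time consistency, as described above, amounts to referencing the seeker measurement and the navigation data to the same instant. A minimal Python sketch, assuming the navigation stream can be linearly interpolated (the timestamps, sample values and helper name are hypothetical):

```python
import numpy as np

# Hypothetical navigation samples: timestamps (s) and yaw angles (rad).
nav_t = np.array([0.00, 0.02, 0.04, 0.06])
nav_yaw = np.array([0.10, 0.12, 0.14, 0.16])

def align_to_seeker_time(t_seeker, t_nav, values):
    """Linearly interpolate a navigation quantity to the seeker's
    measurement timestamp, so that both data streams refer to the same
    instant (a simple stand-in for precise time alignment)."""
    return np.interp(t_seeker, t_nav, values)

# Seeker measurement at 0.05 s falls between two navigation samples:
yaw_at_t0 = align_to_seeker_time(0.05, nav_t, nav_yaw)
```

In a real system the alignment would also account for sensor latencies; linear interpolation between bracketing samples is the simplest consistent choice.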

2.2. Definitions of the Coordinate Systems

To solve the space consistency problem, we define the following coordinate systems. The Earth-centered frame: the origin is the center of the Earth; one axis is perpendicular to the Earth’s equatorial plane and points toward the North Pole; a second axis lies in the Earth’s equatorial plane and points toward the Greenwich meridian; and the third axis completes a right-handed coordinate system with the other two. The local navigation frame: defined with a north-up-east axis order; the origin is the centroid of the seeker; the up axis is collinear with the normal of the navigation frame’s reference ellipsoid at the penetration point; the north axis lies in the meridian plane, is perpendicular to the up axis and points toward the north; and the east axis is determined according to the right-hand rule. The body frame: the origin is the centroid of the seeker; the X axis coincides with the longitudinal axis of the seeker and points in the forward direction; the Y axis lies in the longitudinal symmetry plane of the seeker, is perpendicular to the X axis and points upward; and the Z axis is perpendicular to the XY plane and forms a right-handed coordinate system with X and Y. The body line-of-sight frame: the origin is the centroid of the seeker; the first axis points toward the object along the line of sight; the second axis, which points upward, lies in a plane that contains the line of sight and is perpendicular to it; and the third axis is determined by the right-hand rule.

2.3. Analysis and Computation of the Proposed Method

In practice, the seeker moves with the missile at all times; thus, the position and attitude of the seeker at t differ from those at t₀. As shown in Figure 3, consider the seeker position and body frame at t₀ and the seeker position and body frame at t. From t₀ to t, the body frame undergoes both attitude movements and position movements simultaneously. Without loss of generality, we assume that the attitude movement occurs first, transforming the body frame at t₀ into an intermediate body frame, and that a translational movement then occurs, translating the intermediate body frame to the body frame at t. Therefore, the analysis and calculation required to solve the space consistency problem can be conducted in two stages.
Figure 3

Equivalent decomposition of the seeker’s movement from t₀ to t.

2.3.1. First Stage of the Proposed Method

In the first stage, we consider only the variation of the pitch angle and the variation of the yaw angle caused by the seeker’s attitude motion from t₀ to t. In this stage, the pitch angle and yaw angle of the object are calculated in the intermediate body frame via the following coordinate transformations: the body frame at t₀ is transformed into the local navigation frame; the local navigation frame is then transformed into the intermediate body frame; and finally, the intermediate body frame is transformed into the body line-of-sight frame. The specific steps are as follows. Step 1: The body frame at t₀ is transformed into the local navigation frame by the roll-pitch-yaw rotation order, using the roll, pitch and yaw angles at t₀, respectively [25]. The transformation matrix is calculated according to Equation (3). To avoid the singularity of the Euler angles at a pitch angle of about ±90°, the matrix can instead be calculated from the quaternions according to Equation (4). Step 2: The local navigation frame is transformed into the intermediate body frame by the yaw-pitch-roll rotation order, using the yaw, pitch and roll angles at t, respectively. The transformation matrix is calculated according to Equation (5); similar to Step 1, it can also be calculated from the quaternions according to Equation (6). Step 3: The intermediate body frame is transformed into the body line-of-sight frame by the pitch-yaw rotation order, using the pitch and yaw angles measured by the seeker at t₀. To summarize, we can obtain the overall transformation matrix from the body frame at t₀ to the body line-of-sight frame as Equation (8). Step 4: According to Equation (8), we can calculate the pitch angle [25] and the yaw angle of the object.
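The first stage can be sketched as a chain of direction-cosine matrices; the elementary-rotation conventions, attitude values and function names below are assumptions standing in for the paper’s Equations (3)–(6):

```python
import numpy as np

def Rx(a):
    """Elementary rotation about the X (roll) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def Ry(a):
    """Elementary rotation about the Y (pitch) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def Rz(a):
    """Elementary rotation about the Z (yaw) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def nav_to_body(yaw, pitch, roll):
    """Direction-cosine matrix from the local navigation frame to the
    body frame, applying yaw, then pitch, then roll (axis conventions
    assumed)."""
    return Rx(roll) @ Ry(pitch) @ Rz(yaw)

# Attitudes at t0 and t (hypothetical values, radians).
C_nb0 = nav_to_body(0.10, 0.05, 0.00)  # navigation -> body at t0
C_nb1 = nav_to_body(0.12, 0.06, 0.01)  # navigation -> intermediate body at t

# Carry the line-of-sight direction from the body frame at t0 into the
# intermediate body frame: body(t0) -> navigation -> intermediate body.
los_b0 = np.array([1.0, 0.0, 0.0])     # along the optical axis at t0
los_b1 = C_nb1 @ (C_nb0.T @ los_b0)

# Pitch and yaw of the line of sight in the intermediate body frame
# (X forward, Y up, Z completing the right-handed triad, per Section 2.2).
pitch = np.arctan2(los_b1[1], np.hypot(los_b1[0], los_b1[2]))
yaw = np.arctan2(los_b1[2], los_b1[0])
```

Near a pitch of ±90° the Euler-angle construction is singular, which is why the paper offers the quaternion forms of Equations (4) and (6) as alternatives.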

2.3.2. Second Stage of the Proposed Method

In the second stage, we analyze the variation of the pitch angle and the variation of the yaw angle caused by translating the intermediate body frame to the body frame at t. As shown in Figure 3, to calculate the line-of-sight vector at t, we first need to calculate the seeker displacement vector and the line-of-sight vector at t₀ in the intermediate body frame. The steps for calculating the displacement vector are as follows. Step 1: Calculate the radius of curvature in the prime vertical and the radius of curvature in the meridian of the Earth at the seeker position at t₀, and likewise at the seeker position at t, where 6,378,140 m is the length of the Earth’s semi-major axis and the semi-minor axis is used correspondingly. Step 2: Calculate the coordinates of the seeker at t₀ in the Earth-centered frame, and the coordinates of the seeker at t. Step 3: Calculate the transformation matrix from the Earth-centered frame to the local navigation frame at t₀. Step 4: Calculate the transformation matrix from the local navigation frame at t₀ to the intermediate body frame. From Step 1 to Step 4, the displacement vector can be obtained. The line-of-sight vector at t₀ in the intermediate body frame is calculated as follows. Step 1: Calculate the radius of curvature in the prime vertical and the radius of curvature in the meridian of the Earth at the object position. Step 2: Calculate the coordinates of the object in the Earth-centered frame. Step 3: Calculate the distance between the seeker and the object at t₀. Step 4: Since the 3-2 order pitch angle and yaw angle of the object in the body frame at t₀ are the angles measured by the seeker at t₀, the line-of-sight vector at t₀ is obtained from them. Step 5: The line-of-sight vector at t is calculated by subtracting the seeker displacement vector from the line-of-sight vector at t₀. Given the three components of this vector, the pitch angle and the yaw angle of the object at time t are finally determined [25].
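The geodetic-to-Earth-centered conversion underpinning the second stage can be sketched as follows; the semi-major axis matches the paper’s 6,378,140 m, but the flattening value, the conventional NED (north-east-down) frame standing in for the paper’s north-up-east frame, and the function names are assumptions:

```python
import numpy as np

A = 6378140.0        # Earth semi-major axis (m), as stated in the paper
F = 1.0 / 298.257    # flattening (assumed value)
E2 = F * (2.0 - F)   # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Convert geodetic coordinates (lat, lon in rad, height in m) to
    Earth-centered Cartesian coordinates via the radius of curvature in
    the prime vertical."""
    N = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    x = (N + h) * np.cos(lat) * np.cos(lon)
    y = (N + h) * np.cos(lat) * np.sin(lon)
    z = (N * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_ned_matrix(lat, lon):
    """Rotation from the Earth-centered frame to the local
    north-east-down frame at (lat, lon)."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([
        [-sl * co, -sl * so,  cl],
        [-so,       co,      0.0],
        [-cl * co, -cl * so, -sl],
    ])

def los_in_local_frame(seeker_llh, object_llh):
    """Line-of-sight vector from seeker to object, expressed in the
    seeker's local navigation frame."""
    d_ecef = geodetic_to_ecef(*object_llh) - geodetic_to_ecef(*seeker_llh)
    return ecef_to_ned_matrix(seeker_llh[0], seeker_llh[1]) @ d_ecef

# An object 1000 m directly above the seeker (same lat/lon) lies along
# the local "up" direction, i.e. a negative down component in NED:
los = los_in_local_frame((0.5, 0.5, 0.0), (0.5, 0.5, 1000.0))
```

The same conversion applied to the seeker positions at t₀ and t yields the displacement vector of Steps 1–4, and applied to the object position it yields the line-of-sight vector used in Step 5.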

3. Numerical Simulation Results

3.1. Numerical Simulation Setups

We carry out two simulation experiments in MATLAB to verify and evaluate the proposed method. The setups of the simulations are as follows: the measurement period of the seeker is 50 ms, and the linear FOV and the full FOV of the seeker are fixed. In the simulations, the seeker performs a sinusoidal-like motion that places the object at different positions within the seeker’s FOV. As shown in Table 1, the object is in the linear FOV in the first frame, where its pitch angle and yaw angle are measured. The object exits the linear FOV in the second frame (0.05 s) and stays in the nonlinear FOV from 0.05 s to 0.90 s. The object then leaves the FOV of the seeker at 0.95 s, re-enters the nonlinear FOV at 2.80 s and stays in the nonlinear FOV until 3.65 s. Next, it re-enters the linear FOV and stays there until 5.05 s. Subsequently, the object again exits the linear FOV at 5.10 s and stays in the nonlinear FOV until 6.25 s. The object then stays outside the FOV between 6.30 s and 7.40 s before re-entering the nonlinear FOV at 7.45 s. In the simulations, we calculate the pitch angle and the yaw angle via the proposed method whenever the object is outside the linear FOV of the seeker and evaluate the method’s performance by comparison with the ground truth.
Table 1

Relationship between the object position and the seeker FOV.

Time t (s)                | 0      | 0.05–0.90 | 0.95–2.75 | 2.80–3.65 | 3.70–5.05 | 5.10–6.25 | 6.30–7.40 | 7.45–8.50
Position relative to FOV  | Linear | Nonlinear | Outside   | Nonlinear | Linear    | Nonlinear | Outside   | Nonlinear
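The timeline of Table 1 can be encoded directly, e.g. to drive a simulation loop; the helper below is hypothetical and simply mirrors the table’s interval boundaries:

```python
def fov_region(t):
    """Position of the object relative to the seeker FOV at time t (s),
    following the intervals of Table 1."""
    if t < 0.05:
        return "linear"
    if t <= 0.90:
        return "nonlinear"
    if t <= 2.75:
        return "outside"
    if t <= 3.65:
        return "nonlinear"
    if t <= 5.05:
        return "linear"
    if t <= 6.25:
        return "nonlinear"
    if t <= 7.40:
        return "outside"
    return "nonlinear"  # 7.45 s to 8.5 s

# The proposed method is invoked whenever fov_region(t) != "linear".
```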

3.2. Numerical Simulation Results

The purpose of the first numerical simulation is to verify the correctness of the proposed method. In this numerical simulation, the GPS and INS data contain no errors. The results are shown in Figure 4: Figure 4a shows the pitch angle results, while Figure 4b shows the yaw angle results. The blue ‘∘’ markers represent the ground truth, the red ‘+’ markers the result of the proposed method and the green ‘*’ markers the error between the proposed algorithm and the ground truth. It can be seen that both the pitch angle errors and the yaw angle errors are very close to zero throughout the simulation; the residual errors are attributable solely to the numerical truncation of the simulation software. This simulation result therefore confirms the correctness of the proposed method.
Figure 4

Results of the first numerical simulation. (a) The pitch angle results; (b) The yaw angle results.

The purpose of the second numerical simulation is to evaluate the angle measurement accuracy of the proposed method when the object is outside the linear FOV. In this numerical simulation, both the GPS data and the INS data contain errors. The error in the GPS latitude, longitude and height is 10 m. To make the simulation more challenging, we assume that the INS uses a low-precision MEMS gyroscope [26,27] with a fixed angle drift rate per hour, and that the INS has been working for 60 s after the initial alignment, so it starts with a nonzero attitude error. The attitude error of the INS during the numerical simulation is represented by the green curve in Figure 5c. The simulation results are shown in Figure 5: Figure 5a shows the pitch angle results; Figure 5b shows the yaw angle results; and Figure 5c shows the error between the proposed method and the ground truth. In Figure 5a,b, the blue curve represents the ground truth, while the red curve represents the result of the proposed method. In Figure 5c, the red ‘+’ markers represent the pitch angle error, and the blue ‘∘’ markers represent the yaw angle error. It can be seen that as time progresses and the object enters the nonlinear FOV or leaves the FOV, the angular measurement error of the method grows with the accumulating attitude error of the INS. Specifically, within the 3.6 s after the object leaves the linear FOV for the first time, the absolute values of the pitch angle error and the yaw angle error of the proposed method grow from zero as the INS attitude error accumulates. Furthermore, we can conclude that under the above GPS and INS error conditions, the method keeps the angular measurement error bounded throughout the 6.5 s during which the object is outside the linear FOV.
Figure 5

Results of the second numerical simulation. (a) The pitch angle results; (b) The yaw angle results; (c) The error between the proposed method and the ground truth.

From the theoretical derivation and the simulation process, it can be seen that when the object is outside the linear FOV, the angle measurement accuracy of the method improves as the GPS and INS accuracy increase and as the time spent outside the linear FOV shortens. In practice, the accuracy of the INS is higher than that of the low-precision gyroscope assumed in the simulation, and the amount of time the object spends outside the linear FOV does not exceed 3.6 s. Thus, the proposed method can achieve better angle measurement performance than that exhibited in the numerical simulation.

4. Conclusions

To solve the problem in which a strapdown semi-active laser seeker cannot measure the angles of objects outside the linear FOV, we make full use of GPS and INS data and propose an angle measurement method based on information fusion. When an object is within the nonlinear FOV or outside the FOV, the pitch angle and the yaw angle of the object can be calculated via a fusion of the last valid angles measured by the seeker and the corresponding GPS and INS data. The numerical simulation results show that the proposed method can tolerate a certain amount of GPS and INS error and keeps the angular measurement error small throughout the 6.5 s during which the object is outside the linear FOV. In general, the proposed method is simple, accurate and effective for the angle measurement of objects outside the linear FOV of a strapdown semi-active laser seeker.
