UPTEC F15 043
Degree project, 30 credits. June 2015
UAV Pose Estimation using Sensor Fusion of Inertial, Sonar and Satellite Signals Kenneth Gustavsson
Abstract
In this thesis a system for pose estimation of a quadcopter is developed. The implemented approach uses a complementary Kalman filter to combine the information from an IMU and sonar sensors on board the quadcopter with GPS. This approach does not require a dynamic model of the quadcopter, so modifications of the current quadcopter, or a change to another one, are easily accommodated. Experiments indicate that the system provides accurate pose estimates and is robust in the sense that it can handle loss of GPS signals during shorter time periods. Provided that further work is spent on testing and improving the stability of the estimation, especially the heading estimation, the system may be used to enable autonomous navigation of a quadcopter. The system is implemented using ROS (Robot Operating System) and tested on the low-cost quadcopter Parrot AR.Drone 2.0.
Supervisor: Winston Garcia-Gabin. Subject reviewer: Dave Zachariah. Examiner: Tomas Nyberg. ISSN: 1401-5757, UPTEC F15 043
Popular science summary (Populärvetenskaplig sammanfattning)

A quadcopter can be described as a helicopter with four rotors and is a type of unmanned aerial vehicle that today can be made both small and agile. These properties, together with the fact that they can be equipped with different types of sensors and cameras, make them suitable for a variety of missions, anything from delivering packages to assisting in rescue operations. Some of the advantages of using quadcopters are that they can reach places and spaces that ground-based robots cannot, and that they can be used in environments where a human risks being injured, for example buildings at risk of collapsing, buildings filled with toxic gases, or great heights. To be able to navigate in these environments, out of sight of a pilot, techniques must be used that provide information about, among other things, the position and heading of the quadcopter.

For positioning, GPS can be used. With the help of a number of satellites in orbit around the earth, the position of the user can be determined. This is a technique that works all over the world and in all weather, but only outdoors. In urban environments and other less open spaces, GPS is not as useful, since the position of the user cannot be determined as accurately. As a complement to GPS, sensors on board the quadcopter that measure its motion can be used, so-called accelerometers and gyroscopes. By measuring how the quadcopter accelerates and rotates, its velocity, position and heading can be computed over time. The advantage of onboard accelerometers and gyroscopes is that they do not rely on any external system such as satellites, and they can therefore be used even in cases where GPS is not reliable. A drawback, however, is that these sensors do not measure exactly right all the time, so just as with GPS, one cannot rely solely on the information they provide.

This work focuses on combining information from GPS, accelerometers and gyroscopes to determine the position and heading of a quadcopter. Even though these sensors are not ideal on their own, the strengths of each sensor can be exploited so that together they give better and more reliable information about where the quadcopter is and in which direction it is heading. The result of the work is a computer program that receives data from the quadcopter's sensors, combines the data with so-called sensor fusion, and computes the quadcopter's position and heading continuously while it flies. In the future, this may lead to autonomous unmanned vehicles that can navigate and carry out various missions completely or partly on their own. They can simplify, and make more efficient, tasks that today are time-consuming, difficult or dangerous for us humans to perform.
Acknowledgements

First of all, I would like to thank my supervisor at ABB Corporate Research, Winston Garcia-Gabin, for his encouragement and support during this work. I would also like to thank ABB for giving me the opportunity to perform my thesis work at an inspiring and welcoming workplace. Thanks also to my subject reviewer at Uppsala University, Dave Zachariah, for constantly providing me with help, feedback and, most of all, knowledge. The time at ABB would not have been half as fun if it were not for the group of thesis workers that I have been sharing office space with during the spring. Filip, Olov, Luca, Miguel, Sara, Therese, Angelica and of course Mikael have all hosted great dinners and provided many laughs, a lot of random enjoyment and discussions throughout our time together at ABB. Furthermore, my family deserves a great thank-you for their support and understanding. Finally, I wish to express my deepest gratitude and love to Lina for her endless support and care.

Kenneth Gustavsson
Uppsala, June 2015
Contents

Acknowledgements
List of Figures
List of Tables
Abbreviations and symbols

1 Introduction
    1.1 Related work
    1.2 Objectives
    1.3 Outline of thesis

2 Navigation
    2.1 Introduction
    2.2 Reference frames for navigation
        2.2.1 Inertial frame
        2.2.2 ECEF frame
        2.2.3 Local navigation frame
        2.2.4 The body frame
        2.2.5 Attitude representations and coordinate frame rotations
    2.3 Systems and sensors for navigation purposes
        2.3.1 GPS
        2.3.2 SONAR
        2.3.3 IMU
    2.4 INS
        2.4.1 Inertial navigation system equations
        2.4.2 INS error equations

3 Sensor Fusion
    3.1 Kalman filter
    3.2 The complementary filter
        3.2.1 Feedback and feedforward configuration
    3.3 Measurement model

4 Experimental Platform and Implementation
    4.1 The Parrot AR.Drone 2.0
        4.1.1 The sensors of the AR.Drone 2.0
        4.1.2 Control and communication
    4.2 Robot Operating System
        4.2.1 ROS driver for the Parrot AR.Drone
    4.3 System overview
    4.4 Sensor fusion implementation
        4.4.1 Choice of Q and R

5 Experiments and Results
    5.1 General setup of the experiments
        5.1.1 Initial conditions
        5.1.2 Accuracy measurements
    5.2 Experiment 1, a rectangular path
        5.2.1 Setup of test 1
        5.2.2 Results of test 1
    5.3 Experiment 2, a circular path
        5.3.1 Setup of test 2
        5.3.2 Results of test 2
    5.4 Experiment 3, a robustness test
        5.4.1 Setup and results of test 3
    5.5 Summary and discussion

6 Concluding Remarks
    6.1 Conclusions
    6.2 Further work

A Skew-symmetrical Matrices
B Transformations between Reference Frames
    B.1 Transformation from geodetic to rectangular coordinates in the ECEF frame
    B.2 Transformation from rectangular coordinates in the ECEF frame to the local navigation frame
C Confidence ellipses
List of Figures

2.1 Three different navigation frames
2.2 The {b} frame
2.3 The {n} and {b} frames
2.4 A block diagram of an INS
3.1 Complementary filter in feedback configuration
3.2 Complementary filter in feedforward configuration
4.1 The Parrot AR.Drone 2.0
4.2 ROS graph-architecture
4.3 Overview of the implemented system
4.4 Implemented complementary filter
4.5 Results from IMU test
5.1 Path of experiment nr 1
5.2 Path of experiment nr 2
5.3 Estimated position in 3D, test nr 1
5.4 Estimated and measured position in 2D, test nr 1
5.5 Estimated altitude vs time, test nr 1
5.6 Estimated attitude vs time, test nr 1
5.7 Estimated position in 3D, test nr 2
5.8 Estimated and measured position in 2D, test nr 2
5.9 Estimated altitude vs time, test nr 2
5.10 Estimated attitude vs time, test nr 2
5.11 Position estimates with GPS loss, rectangular path
5.12 Position estimates with few GPS measurements, rectangular path
5.13 Position estimates with GPS loss, circular path
5.14 Position estimates with few GPS measurements, circular path
5.15 Attitude estimates with GPS loss, rectangular path
5.16 Attitude estimates with few GPS measurements, rectangular path
5.17 Attitude estimates with GPS loss, circular path
5.18 Attitude estimates with few GPS measurements, circular path
List of Tables

4.1 Sensor noise parameters
5.1 Initial values for the error covariance matrix P
5.2 Error statistics of estimations
Abbreviations and symbols

Abbreviations

ECEF: Earth-centered earth-fixed
EKF: extended Kalman filter
ESKF: error state Kalman filter
GPS: Global Positioning System
IMU: Inertial Measurement Unit
INS: Inertial Navigation System
MAV: Micro Unmanned Aerial Vehicle
MEMS: Microelectromechanical Systems
MSE: mean square error
RMSE: root mean square error
ROS: Robot Operating System
SINS: Strapdown Inertial Navigation System
SLAM: Simultaneous Localization And Mapping
UAV: Unmanned Aerial Vehicle

List of symbols

x: column vector
X: matrix
x^T: transpose of vector x
X^T: transpose of matrix X
X^{-1}: inverse of matrix X
diag(x), diag(X): diagonal matrix with the elements of x or X as the main diagonal
Chapter 1
Introduction

In recent years, remote controlled and autonomous MAVs, especially quadcopters, have become immensely popular among both researchers and hobbyists. For a hobbyist, they may be high-tech toys for flying around and shooting videos, while for researchers they provide a low-cost platform for embedded control applications and for research within the areas of sensor fusion and autonomous systems. Quadcopters are very agile and maneuverable; they have the ability to take off and land vertically, move in any direction and hover at a fixed position. They can also be made considerably smaller than conventional helicopters since they use four small rotors instead of a big one to obtain lift. These abilities make quadcopters suited for many applications, for example reconnaissance missions in areas dangerous for humans and inaccessible to other aerial vehicles and robots, like collapsed buildings and hazardous environments.

In order to know their position and navigate in an unknown environment, quadcopters need to rely on sensors that provide information about position, velocity and orientation. Sensors may provide this either directly, like a GPS receiver that provides position, or indirectly, like inertial sensors providing velocity and orientation through integration. A lot of research also focuses on using onboard cameras for navigation purposes. Regardless of which combination of sensors is used, the key to a good navigation system is to combine the data from the different sensors in a favorable way. This is referred to as sensor fusion, i.e. fusing data or information from different sources so that the resulting information is better than if the sensors were considered individually.

The idea of this thesis is to look into the area of sensor fusion applied to quadcopters for navigation purposes; more specifically, to fuse the information from inertial, satellite and sonar sensors to provide position and orientation estimates of a low-cost quadcopter. Combining GPS and inertial sensors is preferable since they have complementary characteristics and should therefore yield a robust and reliable system.
1.1 Related work
The problem of estimating the position and orientation of an aerial vehicle has been a topic of research for many years, and the literature provides different solutions to the problem. The most frequent approach, however, is to fuse the information from inertial sensors and one or more complementary sensors using variants of the Kalman filter. A general choice of filter version and complementary sensors does, however, not exist. In general, the fusion problem is solved in two different ways. The first solution uses a nonlinear version of the Kalman filter (the extended or unscented Kalman filter) to estimate the full navigation state (direct filtering) using a dynamic model of the system. The second solution lets the inertial sensors provide a nominal navigation solution and then estimates the error of this solution (indirect filtering) with a nonlinear or linear Kalman filter that is fed with information from the complementary sensors. The general idea and motivations for the second approach are described in [1]. It is stated that the second solution is preferred over the first one since cumbersome and approximate dynamic modeling is circumvented, and since an error model is better suited for estimation with a Kalman filter than a full dynamic model.

An integration of GPS and inertial sensors based on indirect filtering with a linear Kalman filter is successfully applied in [2], [3], [4] and [5]. Simulations made in [3] and [5] show that a loosely coupled integration of GPS and inertial sensors, i.e. an integration where the GPS measurement is seen as a sensor y_k = p_k for the position p_k, provides a more reliable and accurate navigation solution than each sensor individually. This is also concluded in [4], where the approach is applied on a fixed-wing UAV. The same approach is further developed in [2], where a magnetometer is incorporated to provide a reliable heading estimate for a helicopter UAV during hover.

[6] and [7] apply the first mentioned approach. In both these works, the use of a monocular camera onboard a quadcopter and visual SLAM is evaluated with the simple motivation that it provides a solution to the positioning problem in GPS-denied environments. This is important since it opens up for usage in indoor areas. The conclusions drawn in [6] are that the accuracy of the pose estimation depends on the volume of the operational area; a larger volume results in lower accuracy. It is further concluded that their approach imposes requirements on the environment; a sufficient amount of structure/texture at a relatively close distance (2-5 m) from the camera is needed. On the other hand, the use of an approximate dynamic model of the quadcopter performs very well in practice despite many physical assumptions. In [7] it is concluded that a monocular camera together with inertial sensors can provide a base for reliable pose estimates in GPS-denied environments.

The combination of a monocular camera and inertial sensors is also used in [8]. Instead of using visual SLAM, markers called tags at known positions, detected by the camera, regularly provide position information within reasonable time intervals. A Sigma-Point Kalman filter in feedback form is used to correct the nominal inertial navigation solution given by the inertial sensors. The results indicate that this is a possible solution for indoor navigation. However, the approach requires an infrastructure of tags and needs to be further evaluated for longer trajectories.
1.2 Objectives
The main objective of this thesis was to develop a system able to provide position and attitude information of a quadcopter. This has been done by implementing a sensor fusion algorithm that combines information from the GPS receiver, IMU and ultrasonic range sensors of the AR.Drone 2.0 quadcopter. The implementation was done in C++ using ROS, a tool used for the development of robotic applications. The chosen sensor fusion implementation is a so-called complementary filter, where a nominal navigation solution is provided via the IMU measurements and corrected with the help of a Kalman filter and information from the GPS and ultrasonic range sensors. This approach was chosen since it makes it possible to use the same filter for another quadcopter; its performance mainly depends on the sensors and not on a dynamic model of the quadcopter.
1.3 Outline of thesis
The content of this thesis is organized as follows:

• Chapter 2 introduces useful navigation theory and commonly used sensors for navigation. The chapter ends with a presentation of the INS equations and how sensor deficiencies introduce the need for sensor fusion.
• In chapter 3, it is shown how sensor fusion can be used to combine the information from inertial sensors and other sensors like GPS to provide a reliable, high-rate navigation system.
• A description of the experimental platform and details on the implementation of the theory from the previous chapters can be found in chapter 4.
• How the experiments were performed and their results are presented in chapter 5. The chapter also contains a summary and discussion of the results.
• At the end of the thesis, in chapter 6, conclusions based on the work are drawn, together with some suggestions for future work.
Chapter 2

Navigation

2.1 Introduction

Navigation systems in different forms are, and have been, present in our daily lives throughout history. They have enabled us to discover new parts of the world and the universe, and to find the way to the supermarket. Two of today's standard techniques for navigation are INS and GPS. GPS is a satellite-based positioning system continuously providing users with position information all around the globe, in all kinds of weather. An INS is mounted on a moving object and uses measurements from inertial sensors, i.e. accelerometers and gyroscopes, together with navigation dynamics to compute the position, velocity and orientation of the object; it does not rely on external signals for the navigation. INS and GPS are preferably used together due to their complementary characteristics, which will be explained later in the chapter. Sonar sensors are a type of ranging sensor; a UAV can for example use sonar to detect obstacles or measure the distance to the ground.

The rest of this chapter is devoted to presenting different coordinate frames used in navigation and more details about inertial sensors, GPS and sonar sensors. At the end of the chapter, the INS equations and how sensor deficiencies affect them are shown. This will show the need for sensor fusion to improve the performance of inertial sensors by combining them with information from other sensors such as GPS and sonar.
2.2 Reference frames for navigation
This section describes four different coordinate frames used in navigation of aerial vehicles. The four coordinate frames are the inertial frame, the ECEF frame, the local navigation frame and the body frame; they are shown in figures 2.1 and 2.3.
2.2.1 Inertial frame

The inertial frame in this case is a frame with origin at the earth's center of mass. The z-axis is aligned with the earth's spin axis and the x- and y-axes are in the equatorial plane so that a right-handed coordinate system is completed. This system does not rotate with the earth and is from here on denoted the {i}-frame.
2.2.2 ECEF frame

The ECEF frame is similar to the {i}-frame with the difference that it rotates with the earth. Hence its z-axis always points from the earth's center of mass up through the true north pole. The x-axis passes through the intersection of the prime meridian and the equator, and the y-axis completes a right-handed coordinate system. Using these axes for the ECEF frame is usually referred to as the ECEF rectangular coordinate system [9], but the ECEF frame can also be described with the ECEF geodetic coordinates. These coordinates are referred to as longitude, latitude and altitude (λ, φ, h) and are used to describe the location of a point relative to the WGS-84 defined ellipsoid [9].
2.2.3 Local navigation frame

The local navigation frame, shown in figure 2.3, has x- and y-axes tangential to the earth's surface, with the x-axis pointing towards north and the y-axis to the west. The z-axis points upwards to complete a right-handed system. For applications where the navigation is done in a local and bounded space this system is often used, with the origin placed at the starting point of the vehicle.
Figure 2.1: Three different navigation frames: Inertial frame {i}, Earth frame {e} and Local navigation frame {n}.
Figure 2.2: The {b} frame. x_b points towards the front of the vehicle; the rotations about the x-, y- and z-axes are roll, pitch and yaw.
2.2.4 The body frame

When having body-fixed sensors measuring quantities relative to the vehicle's body it is convenient to have a body-fixed frame {b}, shown in figure 2.2. The vehicle's center of mass is picked as origin, the z-axis points to the top of the vehicle, the x-axis to the front, and the y-axis completes a right-handed system (i.e. points to the left side of the vehicle). The rotations around the x-, y- and z-axes are referred to as roll, pitch and yaw respectively, denoted φ, θ and ψ. These should not be mistaken for the Euler angles that will be mentioned later, which are usually denoted in the same way.
Figure 2.3: The local navigation frame {n} (north, west, up) and the body-fixed frame {b}.
2.2.5 Attitude representations and coordinate frame rotations

Consider a vehicle measuring motions relative to its own body-fixed frame, while an external system measures the position, velocity etc. of the same vehicle in the {e} frame and the actual navigation is performed relative to the local navigation frame {n}. This shows the need for tools for describing the orientation of a rigid body with respect to a reference frame and for rotating between reference frames. Three of the most common ways to represent orientations and rotations in R^3 are Euler angles, direction-cosine matrices and quaternions.

Euler angles. With Euler angles, the rotation from one frame to another is described by three successive planar rotations about different axes. If the rotation from {b} to {n} is parameterized by the three Euler angles θ = [φ, θ, ψ]^T, the complete rotation of {b} to {n} is achieved by [10]

1. rotating through angle ψ about z^b,
2. rotating through angle θ about the new y-axis,
3. rotating through angle φ about the new x-axis.

Note that the Euler angles φ, θ, ψ are not the same as the angles used to describe the rotation of the axes of the {b}-frame.

Direction-cosine matrix. A direction-cosine matrix R_a^b(θ) ∈ SO(3)¹, rotating frame {a} to {b}, is a 3 × 3 matrix enabling a vector v^a in frame {a} to be rotated into frame {b} with the vector-matrix multiplication v^b = R_a^b v^a. The rotation matrix from frame {b} to frame {n} can be constructed by multiplying the rotation matrices for each of the above-mentioned Euler angle rotations. First, the three separate rotations may be expressed as the rotation matrices

R_1 = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix},     (2.1)

R_2 = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}     (2.2)

and

R_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix}     (2.3)

where R_1 is the rotation around the axis z^b, R_2 the rotation around the new y-axis, and R_3 the rotation that finally aligns frame {b} with frame {n}. The full rotation of {b} to {n} is then given as the direction-cosine matrix

R_b^n(θ) = R_1 R_2 R_3 = \begin{bmatrix} c(\psi)c(\theta) & c(\psi)s(\theta)s(\phi) - s(\psi)c(\phi) & c(\psi)s(\theta)c(\phi) + s(\psi)s(\phi) \\ s(\psi)c(\theta) & s(\psi)s(\theta)s(\phi) + c(\psi)c(\phi) & s(\psi)s(\theta)c(\phi) - c(\psi)s(\phi) \\ -s(\theta) & c(\theta)s(\phi) & c(\theta)c(\phi) \end{bmatrix}     (2.4)

where s(x) and c(x) are used to denote sin(x) and cos(x) respectively.

Quaternions. A quaternion is basically a four-element vector q = [q_1, q_2, q_3, q_4]^T where the elements are real numbers. Using quaternions for rotation in R^3 is less intuitive than Euler angles and direction cosines, but they provide a more stable and efficient attitude representation; they are therefore often used as internal states for orientation representation in computer implementations of navigation applications [11]. Even though quaternions were used in the implementation part of this work, understanding them is not necessary for the rest of this thesis and they will therefore not be further explained.

It has now been shown how the orientation or attitude of a vehicle can be described, as well as different ways to rotate the {b} frame into the {n} frame. This will later be used when the inertial navigation equations are given in section 2.4.1. For transformations from geodetic and rectangular coordinates of the {e} frame to the local navigation frame, see Appendix B.

¹ SO(3) is the group of all rotations about the origin of R³.
2.3 Systems and sensors for navigation purposes

2.3.1 GPS

GPS is a satellite-based navigation system with worldwide coverage, available in all weather, day or night. A set of satellites orbiting the earth ensures that a GPS receiver anywhere on the globe can be located, provided that the receiver is within line-of-sight of at least four of the satellites. The satellites broadcast messages containing the precise time when a message was sent, the identity of the satellite and its current position. A GPS receiver determines the distance to a single satellite by comparing the arrival and sending times of a message, knowing that the signals travel with the speed of light. Knowing the distance to at least four satellites and their positions relative to the earth, the receiver determines its position in the ECEF coordinate system (λ, φ, h) using trilateration.

GPS is considered to be a reliable system for outdoor localization. However, GPS data is generally too noisy and provided at too low a sampling rate to give any information about position derivatives, especially for high-dynamic applications like quadcopters. Other drawbacks are that it suffers from short-term signal loss that can be due to blockage, jamming or interference.

Figure 2.4: Block diagram of an Inertial Navigation System. The navigation dynamics relate the inertial measurements to position, velocity and attitude.
2.3.2 SONAR

A SONAR (SOund NAvigation and Ranging) sensor uses either infrasonic or ultrasonic sound to detect and measure the range to an obstacle. An active ultrasonic sonar emits a pulse s(t) in a certain direction θ and then detects eventual echoes y(t) to determine the range to an obstacle. The received signal y(t) is related to the range R to the obstacle as

y(t) = h(t) * s(t - 2R/c),

where * denotes convolution, h(t) is the dynamic distortion of the signal (impulse response), c the speed of sound in air and t the time of flight for the signal [11]. Standard ultrasonic rangefinders used for robot applications are able to provide tens of distance measurements per second. Their angular resolution is usually one or two orders of magnitude worse than that of laser ranging sensors, which have a sub-degree angular resolution. Nonetheless, sonar sensors are still appealing thanks to lower prices and power consumption [12].
2.3.3 IMU

A 6-DoF IMU consists of a three-axis accelerometer and a three-axis gyroscope. These are inertial sensors and measure the specific force and angular velocity, respectively, of the IMU. The axes of both the gyroscope and the accelerometer are ideally orthogonal and aligned with the axes of the platform they are mounted to. Today, MEMS sensors are the most common type of IMU thanks to their low price and small size, which makes them useful in all kinds of devices.

The error sources for accelerometers and gyroscopes can be divided into random errors (measurement noise) and systematic errors (offset or bias, scale factors and misalignment errors) [9]. If the scale factor and misalignment errors are neglected, assuming they are compensated for with calibration, the output of the sensors can be modeled as

\tilde{f}^b = f^b + \delta f^b + n_f
\tilde{\omega}^b = \omega^b + \delta\omega^b + n_\omega     (2.5)

where f^b, ω^b ∈ R³ are the true specific force and angular velocity respectively, n represents Gaussian white measurement noise and δf^b, δω^b are slowly varying biases [8], [2]. An IMU usually provides fast-rate (100-1000 Hz) information about linear accelerations and angular velocities. Even though position, linear velocity and orientation information can be obtained through integration of the measurements, the earlier mentioned errors, and the integration of them, result in useless solutions within seconds for consumer-grade MEMS sensors.
2.4 INS

As mentioned in the introduction to this chapter, an INS is a system that provides navigation information using inertial sensors and navigation dynamics. A block diagram of an INS is shown in figure 2.4. The specific INS depicted in figure 2.4 is a SINS, a system where the inertial sensors are strapped directly onto the vehicle frame. The measurements of the sensors then describe motions relative to the body frame. Since errors in the inertial measurements may result in a useless navigation solution within seconds, external aiding sensors can be used to periodically correct the navigation information from the INS. A common choice of external aiding is GPS, since an INS and GPS together have the ability to provide information about position, position derivatives and orientation with a high sampling rate and good accuracy [11].
2.4.1 Inertial navigation system equations

As mentioned in the previous section, when using a strapdown INS on a flying (or any) vehicle, the IMU measures the specific force f^b and angular velocity ω^b relative to the body frame {b} of the vehicle. When the navigation is performed in the local navigation frame {n}, the IMU measurements need to be rotated into the {n} frame. Let the position and velocity of an aerial vehicle in the {n} frame be denoted p^n and v^n. Furthermore, let the attitude of the aircraft be parameterized by the Euler angles θ = [φ, θ, ψ]^T or the rotation matrix R_b^n(θ). If the specific force f^b and angular velocity ω^b given by the sensors are ideal, the evolution of p^n, v^n and R_b^n(θ) is given by the INS equations

\dot{p}^n = v^n
\dot{v}^n = R_b^n f^b + g^n - 2[\omega_{ie}^n]_{\times} v^n
\dot{R}_b^n = R_b^n [\omega^b]_{\times} - R_b^n [R_n^b \omega_{ie}^n]_{\times}     (2.6)

where ω_ie^n and g^n are the local angular velocity of the earth and the gravitational acceleration, and [a]_× is the skew-symmetric matrix representation of the vector cross product. Some of the terms in these state equations might be trivial, like the derivative of the position being equal to the velocity, or how the gravitational acceleration and specific force affect the velocity derivative. Other terms might not be as trivial. For a full derivation of the INS equations, see for example [8].
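As an illustration, a minimal sketch of one propagation step of (2.6) is given below, assuming the Eigen C++ library and a simple first-order Euler integration. The earth-rate term ω_ie^n is neglected, as is later done in the actual implementation (section 4.4); the NavState struct and the helper names are hypothetical.

```cpp
#include <Eigen/Dense>

struct NavState {
  Eigen::Vector3d p;  // position p^n
  Eigen::Vector3d v;  // velocity v^n
  Eigen::Matrix3d R;  // attitude R_b^n
};

// Skew-symmetric matrix [a]_x such that [a]_x * b = a x b (Appendix A).
Eigen::Matrix3d skew(const Eigen::Vector3d& a) {
  Eigen::Matrix3d S;
  S <<     0.0, -a.z(),  a.y(),
         a.z(),    0.0, -a.x(),
        -a.y(),  a.x(),    0.0;
  return S;
}

// One first-order integration step of the INS equations (2.6), with the
// earth-rate term omega_ie^n set to zero.
void insStep(NavState& s, const Eigen::Vector3d& f_b,
             const Eigen::Vector3d& w_b, double dt) {
  const Eigen::Vector3d g_n(0.0, 0.0, -9.81);  // gravity in {n} (z up)
  s.p += dt * s.v;
  s.v += dt * (s.R * f_b + g_n);
  s.R += dt * s.R * skew(w_b);
  // Re-orthonormalize so that R stays a rotation matrix despite the
  // crude integration (the implementation instead uses quaternions).
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(
      s.R, Eigen::ComputeFullU | Eigen::ComputeFullV);
  s.R = svd.matrixU() * svd.matrixV().transpose();
}
```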
2.4.2 INS error equations

The INS navigation equations given in (2.6), which describe the evolution of the navigation states, assume ideal IMU measurements. Therefore, the sensor errors described in 2.3.3 cause deviations from the actual states. Substituting the ideal values of the IMU measurements in (2.6) with the sensor models from (2.5) gives the equations

\dot{\hat{p}}^n = \hat{v}^n
\dot{\hat{v}}^n = \hat{R}_b^n (f^b + \delta f^b + n_f) + \hat{g}^n - 2[\hat{\omega}_{ie}^n]_{\times} \hat{v}^n
\dot{\hat{R}}_b^n = \hat{R}_b^n [\omega^b + \delta\omega^b + n_\omega]_{\times} - \hat{R}_b^n [\hat{R}_n^b \hat{\omega}_{ie}^n]_{\times}     (2.7)

which describe the actual propagation of the navigation states due to the sensor errors. What is obtained are just estimates of the actual position, velocity and attitude, containing deviations from the true states. However, if these deviations could be estimated, it would be possible to correct for them and obtain a solution closer to the true one. Define the deviation states as δp ≜ p̂^n − p^n, δv ≜ v̂^n − v^n and R(δθ) ≜ R̂_b^n (R_b^n)^T. By taking the difference between (2.6) and (2.7), the following equations for the propagation of the deviation states are obtained,

\delta\dot{p}^n = \delta v^n
\delta\dot{v}^n = -[\hat{R}_b^n \tilde{f}^b]_{\times} \delta\theta + \hat{R}_b^n \delta f^b + \hat{R}_b^n n_f
\delta\dot{\theta}^n = -\hat{R}_b^n \delta\omega^b - \hat{R}_b^n n_\omega     (2.8)
where it is assumed that the deviations are sufficiently small [8]. By modeling the slowly varying biases of the inertial sensors as Brownian motion, i.e. integrated white noise, \dot{\delta f}^b = w_{\delta f}, \dot{\delta\omega}^b = w_{\delta\omega}, they can be incorporated into the system state, and a total state vector for the deviations may be written as

\delta x = \begin{bmatrix} (\delta p^n)^T & (\delta v^n)^T & (\delta\theta)^T & (\delta f^b)^T & (\delta\omega^b)^T \end{bmatrix}^T.     (2.9)

Then, by using a first-order approximation of the derivatives, the propagation of the deviations can be written as

\delta p_{k+1}^n = \delta p_k^n + dt_k \, \delta v_k^n
\delta v_{k+1}^n = \delta v_k^n - dt_k [\hat{R}_{b,k}^n \tilde{f}_k^b]_{\times} \delta\theta_k + dt_k \hat{R}_{b,k}^n \delta f_k^b + dt_k \hat{R}_{b,k}^n n_{f,k}
\delta\theta_{k+1} = \delta\theta_k - dt_k \hat{R}_{b,k}^n \delta\omega_k^b - dt_k \hat{R}_{b,k}^n n_{\omega,k}
\delta f_{k+1}^b = \delta f_k^b + dt_k w_{\delta f,k}
\delta\omega_{k+1}^b = \delta\omega_k^b + dt_k w_{\delta\omega,k}     (2.10)

or in a more compact way

\delta x_{k+1} = F_k \delta x_k + G_k w_k     (2.11)

using the noise vector

w_k = \begin{bmatrix} (n_{f,k})^T & (n_{\omega,k})^T & (w_{\delta f,k})^T & (w_{\delta\omega,k})^T \end{bmatrix}^T

and system matrices

F_k = \begin{bmatrix}
I_3 & dt_k I_3 & 0_3 & 0_3 & 0_3 \\
0_3 & I_3 & -dt_k [\hat{R}_{b,k}^n \tilde{f}_k^b]_{\times} & dt_k \hat{R}_{b,k}^n & 0_3 \\
0_3 & 0_3 & I_3 & 0_3 & -dt_k \hat{R}_{b,k}^n \\
0_3 & 0_3 & 0_3 & I_3 & 0_3 \\
0_3 & 0_3 & 0_3 & 0_3 & I_3
\end{bmatrix}

and

G_k = \begin{bmatrix}
0_3 & 0_3 & 0_3 & 0_3 \\
dt_k \hat{R}_{b,k}^n & 0_3 & 0_3 & 0_3 \\
0_3 & -dt_k \hat{R}_{b,k}^n & 0_3 & 0_3 \\
0_3 & 0_3 & dt_k I_3 & 0_3 \\
0_3 & 0_3 & 0_3 & dt_k I_3
\end{bmatrix}.
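A sketch of how F_k and G_k could be assembled is given below, again assuming Eigen and reusing the skew() helper from the earlier sketch. The block layout follows the deviation-state ordering of (2.9).

```cpp
#include <Eigen/Dense>

Eigen::Matrix3d skew(const Eigen::Vector3d& a);  // as in the earlier sketch

// Assemble the 15x15 transition matrix F_k and the 15x12 noise input
// matrix G_k of (2.11). R is the current attitude estimate R_b^n and
// f is the bias-corrected specific force measurement.
void buildFG(const Eigen::Matrix3d& R, const Eigen::Vector3d& f, double dt,
             Eigen::Matrix<double, 15, 15>& F,
             Eigen::Matrix<double, 15, 12>& G) {
  const Eigen::Matrix3d I = Eigen::Matrix3d::Identity();
  F.setIdentity();
  F.block<3, 3>(0, 3)  =  dt * I;            // dp    <- dv
  F.block<3, 3>(3, 6)  = -dt * skew(R * f);  // dv    <- dtheta
  F.block<3, 3>(3, 9)  =  dt * R;            // dv    <- df^b
  F.block<3, 3>(6, 12) = -dt * R;            // dtheta <- dw^b
  G.setZero();
  G.block<3, 3>(3, 0)  =  dt * R;            // accelerometer noise n_f
  G.block<3, 3>(6, 3)  = -dt * R;            // gyroscope noise n_w
  G.block<3, 3>(9, 6)  =  dt * I;            // bias random walk w_df
  G.block<3, 3>(12, 9) =  dt * I;            // bias random walk w_dw
}
```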
By having this description of the deviation state it is possible to use an estimation algorithm to estimate the deviations and correct for them. According to the definition of the deviation states, the true navigation states p^n, v^n and R_b^n are computed as

p^n = \hat{p}^n - \delta p^n
v^n = \hat{v}^n - \delta v^n
R_b^n = R^T(\delta\theta) \hat{R}_b^n     (2.12)

where R(δθ) is the rotation matrix in (2.4) using δθ = [δφ, δθ, δψ]^T instead of θ = [φ, θ, ψ]^T. The next chapter describes how the deviation states can be estimated using sensor fusion, and thereby how an INS together with complementary sensors can be used to provide high-rate and reliable navigation information.
Chapter 3

Sensor Fusion

This chapter provides some background on sensor fusion and how it can be used to improve the position estimation of a UAV equipped with an IMU and GPS: first by presenting the Kalman filter equations for linear estimation, then by describing different approaches for fusion of INS and external position aiding, especially the feedback and feedforward versions of the error state Kalman filter. At the end of the chapter, a measurement model for using GPS as external position aiding is derived.

3.1 Kalman filter
A general discrete-time linear system can be described with the state space model

x_{k+1} = F_k x_k + G_k v_k,  k ≥ 0
y_k = H_k x_k + e_k,  k ≥ 0     (3.1)
where x_k is the state vector and y_k the measurement vector. The matrices F_k, G_k and H_k can be time-variant or time-invariant. The process v_k is called the process noise while the process e_k is called the measurement noise. They are assumed to have the known covariances COV(v) = Q and COV(e) = R.

Assuming that an initial estimate x̂_{1|0} of the state and its covariance P_{1|0} = COV(x_0) are known, the Kalman filter finds the MSE-optimal estimate¹ x̂_{k|k} at time k, given measurements y up to time k. This is done in a two-step manner. The first step is the measurement update, in which the state x̂_{k|k} and covariance P_{k|k} are computed by correcting the previous estimates of the same quantities, x̂_{k|k-1}, P_{k|k-1}, using a measurement y_k taken at time k. In the second step, called the time update, the process model and noise properties are exploited to predict the state and its covariance at the next time step. The details of the discrete-time Kalman filter are given by [11] and can be seen in Algorithm 1.

¹ Minimizes MSE(x̂) = E‖x̂ − x‖² = Trace E(x̂ − x)(x̂ − x)^T = Trace(P).
Algorithm 1 The Kalman filter
The Kalman filter can be divided into the following recursions, initialized with x̂_{1|0} = E(x_0) and P_{1|0} = COV(x_0):

1. Measurement update.
   x̂_{k|k} = x̂_{k|k-1} + K_k ε_k
   P_{k|k} = (I - K_k H) P_{k|k-1}

2. Time update.
   x̂_{k+1|k} = F_k x̂_{k|k}
   P_{k+1|k} = F_k P_{k|k} F_k^T + G_k Q_k G_k^T

The measurement update in Algorithm 1 is written in a compact way using the following definitions. The innovation ε_k is defined as

ε_k = y_k - H_k x̂_{k|k-1},

i.e. the difference between the measurement y_k and the prediction ŷ_{k|k-1} = H x̂_{k|k-1}. The covariance of the innovation is computed as

S_k = H P_{k|k-1} H^T + R.

Knowing the innovation and its covariance, the Kalman gain can then be obtained as

K_k = P_{k|k-1} H_k^T S_k^{-1}.

Structuring the steps of the measurement update this way is preferred since repeated computations of the same quantity are avoided.

The covariance matrix P is by definition symmetric and positive definite. However, in computer implementations of the filter, numerical errors can result in P losing these properties [9]. An often-used approach to keep the covariance matrix from losing its symmetry is to resymmetrize it at regular intervals with the equation

P = (1/2)(P + P^T).     (3.2)

This is a relatively cheap operation that only affects P if it is not symmetric; if the covariance matrix already is symmetric it is returned unchanged, since A = A^T for a symmetric matrix A. Another way to ensure that P stays symmetric is to use a square-root implementation, which would also ensure positive definiteness of P. However, since square-root implementations are more complex to program and require more computations [9], this may be avoided until found necessary.
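The two recursions of Algorithm 1, including the resymmetrization (3.2), translate almost line by line into code. Below is a minimal, general-purpose sketch assuming the Eigen C++ library; the function names are hypothetical.

```cpp
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Measurement update of Algorithm 1 in innovation form, followed by
// the resymmetrization step (3.2).
void measurementUpdate(VectorXd& x, MatrixXd& P, const VectorXd& y,
                       const MatrixXd& H, const MatrixXd& R) {
  const VectorXd eps = y - H * x;                      // innovation
  const MatrixXd S = H * P * H.transpose() + R;        // innovation covariance
  const MatrixXd K = P * H.transpose() * S.inverse();  // Kalman gain
  x += K * eps;
  P = (MatrixXd::Identity(P.rows(), P.cols()) - K * H) * P;
  P = (0.5 * (P + P.transpose())).eval();              // eq. (3.2)
}

// Time update of Algorithm 1.
void timeUpdate(VectorXd& x, MatrixXd& P, const MatrixXd& F,
                const MatrixXd& G, const MatrixXd& Q) {
  x = F * x;
  P = F * P * F.transpose() + G * Q * G.transpose();
}
```

Note that S.inverse() is acceptable for the small (3 × 3) innovation covariance used in this thesis; for larger measurement vectors a factorization-based solve would be preferred.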
3.2 The complementary filter

The complementary filter, also known as the ESKF or the indirect Kalman filter, is a filtering approach used in navigation that provides high-rate information about the navigation state, is robust to sensor unavailability and does not rely on dynamic modeling. The general idea is to fuse information from a high-rate, but maybe not long-term accurate, IMU with measurements from one or more other sensors. These sensors may provide direct measurements of a subset of the navigation state, often position or velocity, at a lower rate than the IMU, but are stable over longer time periods.
As shown in figure 3.1, the inertial measurements are processed by an INS, and an aiding sensor provides extra information, for example position. Whenever an observation from the aiding sensor is available, the observation is predicted from the INS output, and the difference between the predicted and actual observation is fed to a Kalman filter, which then estimates the INS deviation state. The estimated INS deviation state is then used to correct the nominal INS solution. It is because the Kalman filter operates on the deviation state instead of the full state that it is often referred to as the error state Kalman filter or the indirect Kalman filter.

Figure 3.1: Complementary filter in feedback configuration. The filter estimates the INS deviations with help from the aiding sensor. h(·) predicts the output of the aiding sensor from the nominal INS estimate. The estimated deviations are used to correct the INS.
As mentioned in section 1.1 at the beginning of the report, there exist several benefits of using a complementary filter and estimating the INS deviations instead of using a direct filter operating on the full state [9], [1]. Some of them being that:

• A dynamic model is not needed. Hence cumbersome work that may have to be redone when the vehicle is modified is avoided.
• The INS can follow rapid changes in the navigation state, and the Kalman filter can be designed to have low bandwidth since it only estimates the slowly varying deviations.
• The low-frequency deviations are better approximated by a linear process than the high-frequency full state.
3.2.1 Feedback and feedforward configuration

A complementary filter can be implemented in either feedforward or feedback form [1], [9]. The differences between the two forms are explained in [1], where the use of the feedback form is motivated. The difference between the two choices is how the estimated errors are used to correct the nominal INS estimate. Figure 3.2 shows the feedforward version, where the error estimates are fed forward to correct the current INS estimate. In the feedback version, the estimated errors are not only used to correct the current INS output, but also fed back to the INS to correct the starting point of the next iteration, see figure 3.1. Because of this, the next INS iteration will start from the corrected state, which limits the size of the errors. In the feedforward version, on the other hand, the errors are free to drift unbounded [1]. It can be shown that the feedforward configuration of the complementary filter is equivalent to the linearized Kalman filter with the INS estimate being the reference trajectory for the linearization. In the same way, it can be shown that the feedback configuration is the same as the EKF, where the INS provides the estimate of the nominal trajectory and the filter estimates and propagates the deviations from this nominal trajectory [11].
Figure 3.2: Complementary filter in feedforward configuration. The estimated deviations are only used to correct the current INS estimate.
3.3 Measurement model

Now a measurement model for use in a complementary filter based on an INS aided by GPS and sonar will be derived. For the integration of GPS in a complementary filter there exist different methods, usually categorized as loose, tight and deep integration [11]. The approaches are distinguished by the flow of information in the system. Roughly speaking, a loose integration requires the least processing of the GPS observables, while a tight integration requires the most [9]. Here, loose integration is considered, both for the simplicity of the integration and since the used GPS receiver does not provide the necessary information for the other approaches.

In a loose integration, the GPS is seen as a sensor that provides measurements of position p^{n,GPS} = p^n + e^{GPS}, where p^n = [p_x^n, p_y^n, p_z^n]^T is the true position. The noise characteristics are unknown but are assumed to be Gaussian white, an assumption also made in [2] and [3]. The position is given in geodetic ECEF coordinates but can be transformed into the {n} frame using the transformations in appendix B. Altitude measurements p^{n,Alt} = p_z^n + e^{Alt} are given by two sonar sensors. Together the GPS and altimeter measurements form p^{n,GPS,Alt} = p^n + e, where

p^{n,GPS,Alt} = [p_x^{n,GPS}, p_y^{n,GPS}, p_z^{n,Alt}]^T

and

e = [e_x^{GPS}, e_y^{GPS}, e_z^{Alt}]^T.
Furthermore, in a complementary filter the Kalman filter is driven by the difference between the predicted position measurement and the actual position measurement. The INS state contains the position estimate p̂^{n,INS} = p^n + δp^n, which is the predicted position measurement. The difference between the measurements and the predicted measurements is then given by

δy = p^{n,GPS,Alt} - p̂^{n,INS} = (p^n + e) - (p^n + δp^n) = -δp^n + e = H δx + e     (3.3)

where δx is the state vector in (2.9) and H = [-I_3  0] ∈ R^{3×15}. This linear measurement model δy = H δx + e and the linear process model of the deviation state from (2.11) can be used with a linear Kalman filter to estimate the INS deviations and feed them back to the INS to improve its performance, as shown in figure 3.1.
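In code, the model (3.3) amounts to a constant measurement matrix and a three-element innovation. A sketch, again assuming Eigen and hypothetical helper names, that plugs directly into the measurementUpdate() sketch above:

```cpp
#include <Eigen/Dense>

// Measurement matrix H = [-I3 0] of (3.3): the innovation observes the
// negative position deviation, the first block of the 15-state vector.
Eigen::Matrix<double, 3, 15> measurementMatrix() {
  Eigen::Matrix<double, 3, 15> H = Eigen::Matrix<double, 3, 15>::Zero();
  H.block<3, 3>(0, 0) = -Eigen::Matrix3d::Identity();
  return H;
}

// Innovation: fused GPS/sonar position minus the INS-predicted position.
Eigen::Vector3d innovation(const Eigen::Vector3d& p_gps_alt,
                           const Eigen::Vector3d& p_ins) {
  return p_gps_alt - p_ins;
}
```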
Chapter 4
Experimental Platform and Implementation

This chapter first gives some more detail about the Parrot AR.Drone and ROS, which were used in the implementation of the system. After that, an overview of the system is presented before going into the details of the implementation.
4.1 The Parrot AR.Drone 2.0

The Parrot AR.Drone 2.0 is a commercially available quadcopter produced by the French company Parrot. It is a low-cost (≈ 300 €), off-the-shelf quadcopter; the user only needs to charge the batteries before it is ready to fly. It can be controlled with a smartphone application developed by Parrot that is freely available in different application stores. Parrot has also released a Software Development Kit (SDK) called AR.Drone SDK 2.0 to enable users to develop their own applications for both smartphones and personal computers. The design of the AR.Drone follows the classic design of quadcopters. The four rotors are attached to the ends of four arms made of carbon fiber. In the middle, where the four arms cross, the battery and other hardware are placed inside a foam that dampens vibrations from the motors. The drone can be equipped with both an indoor and an outdoor hull. The indoor hull (figure 4.1a) has extra protection for the rotors, while the outdoor hull (figure 4.1b) is more aerodynamic.
Figure 4.1: The Parrot AR.Drone 2.0. (a) Indoor hull. (b) Outdoor hull.
The onboard electronics consist of a motherboard, a navigation board, a battery, sensors and four brushless motors, each controlled by a MIPS AVR processor. The navigation board serves as an interface between the sensors and the motherboard. The main processor is a 32-bit ARM Cortex-A8 running at 1 GHz. The system runs a real-time embedded Linux-based operating system. The OS handles threads for data acquisition, video processing, state estimation, closed-loop control and Wi-Fi communication.
4.1.1 The sensors of the AR.Drone 2.0

The AR.Drone has two video cameras: one mounted vertically on the motherboard to enable ground speed measurement (using visual odometry), and one mounted horizontally at the front of the body, pointing forward. Furthermore, the AR.Drone has a low-cost IMU consisting of a 3-axis accelerometer and a 3-axis gyroscope that provides inertial measurements relative to the body frame. The update frequency of the IMU is 200 Hz. The gyroscope can measure rotation rates up to 2000°/s. The inertial sensors are calibrated to compensate for misalignment and scale factor errors; this is done both at the factory and during use [13]. A magnetometer with a 6° precision can be used as a compass. Two downward-facing sonar sensors provide ground-distance measurements and are used to determine the depth of field of the downward-facing camera. The sonars emit ultrasound at a frequency of 40 kHz and can measure distances between 0.2 and 6 m at 25 Hz. There is also an air-pressure sensor with a ±10 Pa precision to enable altitude measurements and stability at heights where the sonars are not enough. An external GPS receiver produced for the Parrot AR.Drone, called Flight Recorder, can be mounted on top of the battery and connected via USB. According to Parrot it measures the position with an accuracy of 2 m at 5 Hz. Exact information about which sensors are used is not available from Parrot. However, for the first version of the AR.Drone, much information can be found in [13]. The IMU was updated in the second version; the pressure sensor, GPS receiver and magnetometer were not available at all in the first version.
4.1.2 Control and communication

Easy flying of the AR.Drone is made possible with the help of onboard algorithms for estimation of attitude and velocity and for stability control. When the drone is turned on, it sets up an ad-hoc Wi-Fi network to which the user can connect a smartphone or PC. All the available navigation data can then be received at either 15 Hz or 200 Hz. The available navigation data includes sensor readings, states estimated internally onboard the drone and high-level information like whether the drone is flying or not [14], [15]. Control of the drone is done by sending high-level commands for take-off or landing, or a control command u = [φ_d, θ_d, ż_d, ψ̇_d]^T with desired roll, pitch, vertical velocity and yaw rate. The internal controller of the drone then uses closed-loop control to change the speed of the rotors. Using Wi-Fi for the data link introduces some problems, interference from other Wi-Fi devices being one of them. Problems with delays are mentioned in [6], where it is stated that the delay for receiving data from the quadcopter is ≈ 130 ms and ≈ 60 ms for sending control commands.
4.2 Robot Operating System

For the software development it was chosen to use ROS [16], an open-source framework for developing robotic software. It was originally developed by the Stanford Artificial Intelligence Laboratory but is nowadays maintained by the Open Source Robotics Foundation. The aim of ROS is to simplify the task of creating robotic software by providing a base of libraries, tools and conventions for doing just this. Users can use this base to develop their own libraries for their specific application or robot. These application-specific libraries, usually written in C++ or Python, are referred to as packages. They can be distributed to other users via the ROS website. Since all user-developed software is based on the same core components and conventions, sharing and collaborating with other users is simple and a key to the widespread user base.

During runtime, sets of ROS-based processes are represented in a graph architecture where each node represents a specific process. Nodes can communicate with each other using a publish-subscribe message passing system to publish and subscribe to different topics of communication. Figure 4.2 shows an example of the graph structure of a ROS system during runtime. The nodes are represented by ellipses and the topics by rectangles. The arrows between nodes and topics show the flow of data in the system, i.e. whether a node is publishing or subscribing to a topic. The rates at which publishing and subscribing take place can be set individually for each node and chosen to suit the specific case.
Figure 4.2: An example of the graph-architecture of ROS.
4.2.1 ROS driver for the Parrot AR.Drone

The ROS package ardrone_autonomy includes a ROS driver for the Parrot AR.Drone. The driver is based on the official Parrot SDK and makes it possible to develop ROS applications for the drone. It is developed and shared by the Autonomy Lab of Simon Fraser University. The driver publishes the data sent from the drone to several different ROS topics; the ones considered in this work are

• ardrone/navdata_altitude - with messages containing altitude information.
• ardrone/imu - with messages containing IMU data.
• ardrone/navdata_gps - with messages containing GPS position.

The user can specify whether the data should be sent from the drone at 15 Hz or 200 Hz. It can also be specified whether the driver should publish the incoming data directly or at a fixed rate. Reading the GPS data is not supported in the current version of the driver or in the Parrot SDK. Therefore, the driver and SDK had to be modified in this work to include support for GPS. Since the drone sends data at a faster rate (200 Hz) than the GPS update frequency (5 Hz), the same GPS data may be received several times in a row. This can be compensated for by comparing the timestamps set by the drone, which are the same if the data is the same, and also knowing that the GPS only updates five times per second.
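A minimal sketch of how such a ROS node could subscribe to the listed topics is shown below. The ardrone/imu topic carries ROS's standard sensor_msgs/Imu message type; the altitude and GPS topics use package-specific message types (the GPS one existing only in the modified driver), so they are only indicated with comments here. The node name is hypothetical.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Imu.h>

// Called for every IMU message published by the ardrone_autonomy driver.
void imuCallback(const sensor_msgs::Imu::ConstPtr& msg) {
  // Specific force and angular velocity in the body frame {b},
  // to be fed into the INS propagation of section 4.4.
  const double fx = msg->linear_acceleration.x;
  const double wz = msg->angular_velocity.z;
  (void)fx; (void)wz;  // placeholder: forward to the state estimator
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "state_estimator");
  ros::NodeHandle nh;
  ros::Subscriber imu_sub = nh.subscribe("ardrone/imu", 100, imuCallback);
  // ardrone/navdata_altitude and ardrone/navdata_gps would be subscribed
  // to analogously, using the driver's own message types.
  ros::spin();
  return 0;
}
```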
4.3 System overview

A system where a pilot can control the AR.Drone while continuously getting updated information about the drone's pose was implemented. This was done in C++ using the ROS framework. The previously mentioned ardrone_autonomy package was used for communication with the drone, while a new ROS package for state estimation was developed. An overview of the system is shown in figure 4.3. The state estimator uses a complementary filter in feedback configuration to fuse information from the IMU, GPS and sonar sensors of the drone.

Figure 4.3: Overview of the implemented system. The blocks represent different ROS packages. The state estimator was implemented in this thesis.
4.4 Sensor fusion implementation

Figure 4.4: The implemented complementary filter in feedback configuration. The aiding sensor block in figure 3.1 has been replaced by GPS and sonars. h(·) predicts the positions given by GPS and sonars.
The propagation of the navigation state (p̂^n, v̂^n, R̂_b^n) is driven by the IMU measurements according to the inertial navigation equations (2.6). A quaternion representation of the attitude is used for stability reasons [11], [9]. The angular velocity of the earth, ω_ie, is set to zero in the INS mechanization equations (2.6), assuming that it is negligible in the presence of sensor noise and bias. The update frequency of the INS is 200 Hz. The GPS and sonar sensors provide measurements of the position, p^{n,GPS,Alt}. They are modeled using the linear model presented in section 3.3. Whenever a position measurement is available from both the GPS and the sonars, a linear Kalman filter compares the measurement with the predicted measurement according to the INS state and estimates the deviations (δp^n, δv^n, δθ^n) and the biases δf^b, δω^b. The estimated deviations are then used to correct the current INS output according to equation (2.12). The estimated bias deviations are added to the previously known bias estimates, f̂_k^b ← f̂_{k-1}^b + δf_k^b, ω̂_k^b ← ω̂_{k-1}^b + δω_k^b. Since the deviations are corrected for, δx̂ ≡ 0 after each measurement update. This means that the predicted deviation also is equal to zero and only the covariance matrix P has to be updated in the prediction step. The algorithm for the implemented state estimation can be seen in Algorithm 2.

The turn-on biases of the IMU are estimated by taking an average of measurements over a short time period during which the quadcopter remains stationary. The same procedure is used to determine the initial GPS position, which is necessary for the transformation of GPS measurements from geodetic coordinates in the {e}-frame to the {n}-frame.

Algorithm 2 Algorithm for the implemented state estimation.
1: Initialize P_{0|-1} and δx̂_{0|-1}
2: Compute turn-on biases for the IMU, f̂_0^b, ω̂_0^b
3: Compute GPS reference position.
4: for k = 0, ... do
5:   Get latest IMU measurements, f̃_k^b, ω̃_k^b
6:   // Correct for biases
7:   f̃_k^b = f̃_k^b - f̂_k^b
8:   ω̃_k^b = ω̃_k^b - ω̂_k^b
9:   Propagate INS estimates, p̂_k^n, v̂_k^n, R̂_{b,k}^n
10:  if (new position measurement available) then
11:    Transform GPS measurement to the {n}-frame
12:    Form measurement p_k^{n,GPS,Alt}
13:    // Measurement update:
14:    ε_k = p_k^{n,GPS,Alt} - p̂_k^{n,INS}
15:    S_k = H P_{k|k-1} H^T + R
16:    K_k = P_{k|k-1} H_k^T (S_k)^{-1}
17:    δx̂_k = K_k ε_k
18:    P_{k|k} = (I - K_k H) P_{k|k-1}
19:    Correct nominal INS estimates using δx̂_k
20:    P ← P_{k|k}
21:  else
22:    P ← P_{k|k-1}
23:  end if
24:  // Time update:
25:  Form F_k and G_k
26:  P_{k+1|k} = F_k P F_k^T + G Q G^T
27: end for
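The feedback correction on line 19 of Algorithm 2 follows (2.12) together with the bias accumulation described above. A sketch, reusing the hypothetical NavState struct from the chapter 2 sketch and again assuming Eigen:

```cpp
#include <Eigen/Dense>

struct NavState {  // as in the chapter 2 sketch
  Eigen::Vector3d p, v;
  Eigen::Matrix3d R;
};

// Estimated deviation delta_x, split into its five 3-vector blocks.
struct Deviation {
  Eigen::Vector3d dp, dv, dth, dfb, dwb;
};

// Feedback correction (2.12): subtract the position and velocity
// deviations, rotate the attitude by R(d_theta)^T and accumulate the
// estimated bias deviations into the running bias estimates.
void applyCorrection(NavState& s, Eigen::Vector3d& f_bias,
                     Eigen::Vector3d& w_bias, const Deviation& d) {
  s.p -= d.dp;
  s.v -= d.dv;
  Eigen::AngleAxisd rz(d.dth.z(), Eigen::Vector3d::UnitZ());
  Eigen::AngleAxisd ry(d.dth.y(), Eigen::Vector3d::UnitY());
  Eigen::AngleAxisd rx(d.dth.x(), Eigen::Vector3d::UnitX());
  const Eigen::Matrix3d Rdth = (rz * ry * rx).toRotationMatrix();
  s.R = Rdth.transpose() * s.R;
  f_bias += d.dfb;
  w_bias += d.dwb;
}
```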
4.4.1 Choice of Q and R
The noise parameters for the Q and R matrices were chosen with the help of experiments and some manual tuning based on observed performance. Figure 4.5 shows the results of an IMU test where the quadcopter was placed on a flat surface and remained stationary for 45 minutes. The expected outputs of the accelerometer would be 0 m/s² for the horizontal axes and 9.81 m/s² in the z-direction due to the gravitation of the earth. The expected outputs of the gyroscope would be 0 rad/s around all three axes. However, both of the inertial sensors are subject to noise and bias. The noise is represented by the fluctuations in figures 4.5a and 4.5b. That the sensors are subject to biases is visible in 4.5a as offsets from the expected outputs. In figure 4.5b the gyroscope biases are not as easy to see, but they exist. For both of the inertial sensors, the biases also vary with time. The values for the diagonal elements in Q, i.e. the variances of the process noise vector w_k, were chosen with this test as a base but modified with pessimism after it was observed that the sensor behavior during motion was different from the stationary case. Q was therefore set as

    Q = diag(σ_f² × I₃, σ_ω² × I₃, σ_δf² × I₃, σ_δω² × I₃)

where the standard deviations (σ_f, σ_ω, σ_δf, σ_δω) are tabulated in table 4.1. The values are given per sample. The elements of the measurement covariance matrix R were chosen with stationary tests of the GPS receiver and sonar sensor as an initial guess and then tuned until the filter's performance was satisfactory. R was then set as

    R = diag(σ_x², σ_y², σ_z²).    (4.1)

The specific values of σ_x, σ_y and σ_z are tabulated in table 4.1.

Table 4.1: Sensor noise parameters. The values are given on a per sample basis.

    σ_f [m/s²]     0.85
    σ_ω [rad/s]    0.0089
    σ_δf [m/s²]    0.15
    σ_δω [rad/s]   0.0012
    σ_x [m]        1.22
    σ_y [m]        1.22
    σ_z [m]        0.14
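For illustration, Q and R could be assembled from the values in table 4.1 as in the following sketch; Eigen is assumed and the function names are illustrative, not the thesis code.

#include <Eigen/Dense>

// Per-sample standard deviations from table 4.1.
const double sigma_f  = 0.85;    // accelerometer noise [m/s^2]
const double sigma_w  = 0.0089;  // gyroscope noise [rad/s]
const double sigma_df = 0.15;    // accelerometer bias drift [m/s^2]
const double sigma_dw = 0.0012;  // gyroscope bias drift [rad/s]
const double sigma_x = 1.22, sigma_y = 1.22, sigma_z = 0.14;  // position [m]

// Q = diag(sigma_f^2 I3, sigma_w^2 I3, sigma_df^2 I3, sigma_dw^2 I3),
// for the 12-element process noise vector w_k.
Eigen::Matrix<double, 12, 12> makeQ() {
    Eigen::Matrix<double, 12, 1> d;
    d << Eigen::Vector3d::Constant(sigma_f * sigma_f),
         Eigen::Vector3d::Constant(sigma_w * sigma_w),
         Eigen::Vector3d::Constant(sigma_df * sigma_df),
         Eigen::Vector3d::Constant(sigma_dw * sigma_dw);
    return d.asDiagonal();
}

// R = diag(sigma_x^2, sigma_y^2, sigma_z^2), equation (4.1).
Eigen::Matrix3d makeR() {
    return Eigen::Vector3d(sigma_x * sigma_x,
                           sigma_y * sigma_y,
                           sigma_z * sigma_z).asDiagonal();
}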
Figure 4.5: Stationary IMU test. (a) Accelerometer outputs f^b_x, f^b_y, f^b_z [m/s²] and (b) gyroscope outputs ω^b_x, ω^b_y, ω^b_z [rad/s], plotted against t [samples]. The noise is seen as high frequency fluctuations, the bias as offsets from the expected values and the bias drift as the changes in these offsets over time.
Chapter 5
Experiments and Results

In this chapter, the experiments performed to test the implemented sensor fusion algorithm are presented. The results of the experiments are shown and the methods for evaluating the results are described. The chapter ends with a summary and discussion of the results.
5.1 General setup of the experiments
To test the performance of the implemented filtering algorithm, three different tests were performed. The tests were chosen so that different properties of the algorithm were examined. The first two tests were executed outdoors on a football field, an open area where the GPS should have good performance. The estimations were made on-line during the tests. The third test was divided into two parts and made off-line with logged sensor data from the first two tests. First, the robustness to loss of GPS signals was tested by removing GPS data for 5 s of the test. Then it was tested how well the system performs with bad GPS coverage, by artificially downsampling the GPS measurements. These two robustness tests simulate how well the estimation would work if the quadcopter were to perform operations in areas with man-made structures and buildings, i.e. areas where the GPS is unreliable.
Figure 5.1: The path of test nr 1. The path was traversed clockwise.
In the first two tests, the quadcopter was to move along the two different predefined paths shown in figures 5.1 and 5.2, and it was evaluated whether the estimated paths corresponded to the expected ones. To be sure that the predefined paths actually were followed, the quadcopter was carried by hand during the tests. This was to avoid error sources like wind throwing the drone off course and uncertainty in the manual steering.
Figure 5.2: The path of test nr 2. The path was traversed anti-clockwise.

Table 5.1: Initial values for the error covariance matrix P.

    √P₁, √P₂, √P₃ [m]         1.22, 1.22, 1.22
    √P₄, √P₅, √P₆ [m/s]       0.1, 0.1, 0.1
    √P₇, √P₈ [rad]            0.05, 0.05
    √P₉ [rad]                 0.16
    √P₁₀, √P₁₁, √P₁₂ [m/s²]   8×10⁻³ each
    √P₁₃, √P₁₄, √P₁₅ [rad/s]  8×10⁻³ each

5.1.1 Initial conditions
In all the experiments the quadcopter started standing flat on the ground at the origin of the {n} coordinate system with zero velocity. The initial yaw angle was approximated as ψ₀ = 13.5°. The error covariance matrix was initialized as P_{0|−1} = diag(σ²_P1, σ²_P2, ..., σ²_P15) ∈ ℝ^{15×15}, with the individual elements as in table 5.1. The values reflect that there is some uncertainty in the initial position due to GPS errors. The initial velocity is assumed to be zero with high certainty. The values of σ_P7, σ_P8 and σ_P9 correspond to a standard deviation equal to 3° for the initial roll and pitch estimates and circa 10° for the initial yaw estimate. This reflects a relative certainty that the quadcopter started on flat ground but that the initial yaw estimate may have deviated a bit from true north.
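A sketch of how P_{0|−1} could be formed from table 5.1, again assuming Eigen; the grouping of states follows the error vector used in Algorithm 2.

#include <Eigen/Dense>

// Initial error covariance P_{0|-1} = diag(sigma_P1^2, ..., sigma_P15^2),
// with the square roots of the diagonal taken from table 5.1.
Eigen::Matrix<double, 15, 15> initialP() {
    Eigen::Matrix<double, 15, 1> s;
    s << 1.22, 1.22, 1.22,   // position [m]
         0.1, 0.1, 0.1,      // velocity [m/s]
         0.05, 0.05, 0.16,   // roll, pitch, yaw [rad]
         8e-3, 8e-3, 8e-3,   // accelerometer bias [m/s^2]
         8e-3, 8e-3, 8e-3;   // gyroscope bias [rad/s]
    return s.cwiseProduct(s).asDiagonal();  // square element-wise, then diag
}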
5.1.2 Accuracy measurements
Since the only known ground truth for the experiments was the known shape of the traveled paths, two different ways to determine the accuracy of the estimations were devised. For the first path, the rectangular one, the lateral accuracy of the horizontal position estimates was calculated by transforming (p̂^n_x, p̂^n_y) into the frame of the path, i.e. a frame with x- and y-axes along the sides of the rectangular path. Since one of the position components should ideally always be zero, lateral deviations for all four sides could be calculated. The RMSE of the deviations was computed for evaluation. For the circular path, the accuracy of the horizontal positions was determined in a similar way. However, the ground truth was determined by fitting a circle to the position measurements. Then the distance from each horizontal position estimate to the center of the fitted circle was computed and compared to the radius of the fitted circle. As with the rectangular path, the RMSE of the deviations was computed for evaluation.
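A minimal sketch of the evaluation for the circular path, given an already fitted circle (the circle-fitting step itself is omitted); the struct and function names are illustrative.

#include <cmath>
#include <vector>

struct Point { double x, y; };

// RMSE of the radial deviations of horizontal position estimates from
// a circle fitted to the measurements (center (cx, cy), radius r).
// On the ideal path every estimate lies exactly on the circle.
double radialRmse(const std::vector<Point>& estimates,
                  double cx, double cy, double r) {
    double sumSq = 0.0;
    for (const Point& p : estimates) {
        const double dist = std::hypot(p.x - cx, p.y - cy);
        const double dev = dist - r;  // ideally zero
        sumSq += dev * dev;
    }
    return std::sqrt(sumSq / estimates.size());
}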
5.2 Experiment 1, a rectangular path

5.2.1 Setup of test 1
The quadcopter was carried around one lap clockwise along the rectangular path shown in figure 5.1. It started flat on the ground, was then lifted up and tipped so that the nose was pointing slightly downwards (≈10° to 20°). The same pose (relative to the pilot carrying it) and altitude were kept during the whole lap. In each corner of the path, the yaw angle was increased by 90°. The lap was completed by putting the drone back in the initial position. The average traveling speed was ≈1.2 m/s.
5.2.2 Results of test 1
A plot of the estimated trajectory in three dimensions is seen in figure 5.3. In figure 5.4 the horizontal position estimates (orange line) are compared to the GPS measurements (blue stars). The dashed circles are the 95% confidence ellipses for the position error estimates, representing the uncertainty of the estimates; see appendix C for an explanation of confidence ellipses. Overall, the estimates and measurements coincide, with some differences in the corners. The propagation over time of the estimated altitude is shown in figure 5.5. The dashed lines are the ±3σ_pi levels, where σ_pi is extracted from the diagonal elements of the filter's error covariance matrix P. The altitude estimate is kept constant with some small variations. The RMSE of the lateral deviations was computed as described in section 5.1.2 and equals 0.42 m, also tabulated in table 5.2. The estimated attitude is shown in figure 5.6. The angles have been unwrapped using the Matlab command unwrap to avoid jumps of 360° in the plots. The roll and pitch angles vary between their initial values and ±10° to 20°, with quick changes at the corners of the path. The estimated ψ̂ exhibits changes of ≈90° at the corners. The final values of the estimates are φ̂ = −1.37°, θ̂ = 0.77° and ψ̂ = 15.9°. Hence, the largest error is 2.4° (for ψ̂).

Figure 5.3: The estimated position in 3D, test nr 1.

Figure 5.4: Estimated and measured position in 2D, test nr 1. The dashed circles are the 95% confidence ellipses.

Figure 5.5: Estimated p̂_z. The σ-levels represent the uncertainty of the error estimates. For test nr 1.

Figure 5.6: The Euler angles of the estimated attitude. For test 1.
5.3 Experiment 2, a circular path

5.3.1 Setup of test 2
The quadcopter was carried around one lap anti-clockwise along the circular path shown in figure 5.2. It started flat on the ground, was then lifted up and carried around the path with a small negative tilt around the x-axis of the body-fixed system and the nose pointing slightly downwards (≈10° to 20°). The same pose (relative to the pilot carrying it) and altitude were kept during the whole lap. The average speed during the test was 0.7 m/s.
5.3.2 Results of test 2
Figure 5.7 shows the estimated trajectory of the second test in three dimensions. Figure 5.8 shows the estimated horizontal positions and the positions provided by GPS. The dashed circles are the 95% confidence ellipses for the position error estimates. The accuracy of the horizontal position estimates was computed as described in section 5.1.2 and the resulting RMSE of the deviations was 0.20 m, also tabulated in table 5.2. The propagation of the estimated altitude is shown in figure 5.9. The dashed lines are p̂_i ± 3σ_pi, where the standard deviations σ_pi are extracted from the error covariance matrix P. The altitude estimate is kept constant with some small variations. Note that it increases to about 23 cm in the beginning and also stabilizes at the same altitude in the end. This corresponds to the lower limit of the two sonar sensors. The estimated attitude is shown in figure 5.10. The angles have been unwrapped using the Matlab command unwrap to avoid jumps of 360° in the plots. ψ̂ is constantly decreasing and the final angle is equal to the start angle. φ̂ and θ̂ both follow a sinusoid with −20° and +20° as the smallest and largest angles respectively. The final values are φ̂ = 0.72°, θ̂ = −1.13° and ψ̂ = 8.37°, hence between 0.7° and 5.1° from the initial values. As for the first experiment, the largest error corresponds to the heading.

Figure 5.7: The estimated position in 3D, test nr 2.
Figure 5.8: Estimated and measured position in 2D, test nr 2. The dashed circles are the 95% confidence ellipses.

Figure 5.9: Estimated p̂_z. The σ-levels represent the uncertainty of the error estimates. For test nr 2.

Figure 5.10: The Euler angles of the estimated attitude. For test 2.

5.4 Experiment 3, a robustness test

5.4.1 Setup and results of test 3
The third experiment was performed with logged data from the previous experiments with the rectangular and circular paths respectively. For both paths, a loss of GPS signals was simulated by removing measurements for 5 seconds of the test. A simulation of bad GPS coverage was performed by artificially downsampling the GPS from ≈5 Hz to 0.7 Hz. Figures 5.11 and 5.13 show that a loss of GPS signals results in deviations from the actual paths but that the INS manages to capture the motions. The deviations are corrected for as soon as GPS signals are received again. The loss of GPS signals is reflected in the uncertainty of the filter, illustrated by an increasing radius of the confidence ellipses shown as dashed circles in figures 5.11 and 5.13. The simulations of weak GPS signals, made by downsampling the GPS receiver, show that longer periods between position observations result in deviations from the actual path between two consecutive GPS corrections. This is visible in figures 5.14 and 5.12 as a sawtooth pattern. For the circular path, figure 5.14, the sawtooth pattern is most visible in the beginning. For the rectangular path, figure 5.12, the pattern is most visible after each turn and less visible right before a turn. As for experiments 1 and 2, the accuracy of the estimates and the GPS-provided position measurements was calculated. The values are tabulated in table 5.2. The estimated attitude for each of the four cases is shown in figures 5.15, 5.16, 5.17 and 5.18. When compared to the estimated attitude when no GPS deficiencies are present, figures 5.6 and 5.10, there are no visible differences.
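A minimal sketch of how the downsampling of the logged GPS data could be simulated; the struct and function names are illustrative, not the thesis code.

#include <vector>

struct GpsSample { double t; double lat, lon, alt; };  // timestamp [s] + geodetic fix

// Keep at most one GPS sample per 1/rate seconds, simulating bad coverage
// (e.g. downsampling a log from about 5 Hz to rate = 0.7 Hz).
std::vector<GpsSample> downsample(const std::vector<GpsSample>& log, double rate) {
    std::vector<GpsSample> out;
    double nextT = -1e9;                 // accept the first sample
    for (const GpsSample& s : log) {
        if (s.t >= nextT) {
            out.push_back(s);
            nextT = s.t + 1.0 / rate;    // earliest time for the next kept sample
        }
    }
    return out;
}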
5.5 Summary and discussion
The results of experiments 1 and 2 show that the system is able to track both position and attitude of the quadcopter for differently shaped paths, even when position measurements arrive less frequently. The accuracy of the position estimates was evaluated by computing the RMSE of the horizontal position estimates. The values tabulated in table 5.2 show that the RMSEs are below 0.5 m in all cases except when GPS deficiencies are present while traversing the rectangular path. Note that the errors for the two paths should not be compared with each other, since they were computed in different ways. The attitude estimates in experiments 1 and 2 can be considered correct since they propagate as expected over time and the final values of the Euler angles are close to the initial values. It is only the heading in experiment 1 that differs more than 2° from the initial value. That the heading estimate exhibits drift for the rectangular path but not for the circular path may be explained by the fact that the quadcopter was held still, i.e. hovering, in each of the corners of the rectangle, whereby the yaw motion becomes unobservable for a short period.

The propagation of the angles over time exhibits the expected behavior for both experiments 1 and 2. This is most clearly seen by looking at ψ̂ in each case. In test 1, there are distinct jumps of 90° with constant values in between. Since the heading was the same along the straight segments of the path and changed by 90° in the corners, the estimates clearly capture this behavior. For test 2, the heading should decrease approximately linearly with time since the path was a circle traversed anti-clockwise at approximately constant speed; this is also clearly seen in figure 5.10. Regarding the roll and pitch angles: since the nose was constantly pointing downwards at approximately the same angle, the estimated roll and pitch angles should form a square wave in the first experiment and a sine-shaped wave in the second. That this is the case can be seen in figures 5.6 and 5.10.

In the last experiment, the robustness to bad GPS coverage and to a loss of GPS signals was examined. It can be seen both from the calculated RMSE values tabulated in table 5.2 and from the corresponding figures that the system is negatively affected by both loss of GPS signals and bad GPS coverage. Judging by the RMSE values, the result is affected the most by bad GPS coverage. When a loss of GPS signals occurs, the position estimates start to deviate since no corrections of the INS states are made. The deviations are corrected for as soon as GPS signals are available again; the INS can then again accurately estimate the navigation state in between the position corrections. The simulations of bad GPS coverage show two things. First, the INS state starts to deviate in between the position observations. Second, when in motion, getting position measurements corrects for errors in the attitude estimates. This is most visible in figure 5.12, where the position estimates after each turn are off due to uncorrected INS errors. When the system is in motion towards the next corner and thereby excited, the motion combined with position measurements corrects the INS errors and the estimates gradually improve. The same conclusion can be drawn from figure 5.14; here the system is in motion during the whole lap and the INS deviations are smaller than for the rectangular path. Hence, exciting the system with motion improves the estimation of the INS deviations, yielding better estimates of the navigation state even when position observations are made less frequently.

Figure 5.11: Estimated positions in a rectangular path when simulating a loss of GPS signals.

Figure 5.12: Estimated positions in a rectangular path when simulating bad GPS coverage.

Figure 5.13: Estimated positions in a circular path when simulating a loss of GPS signals.

Figure 5.14: Estimated positions in a circular path when simulating bad GPS coverage.

Figure 5.15: Attitude estimates when traversing the rectangular path and simulating a loss of GPS signals.
Figure 5.16: Attitude estimates when traversing the rectangular path and simulating bad GPS coverage.
Table 5.2: Error statistics from the performed experiments. Scenario 1 refers to tests 1 and 2. Scenarios 2 and 3 are with simulated loss of GPS signals and lowered update frequency of the GPS receiver respectively.

                  Rectangular path           Circular path
                  RMSE(p̂^n_{x,y}) [m]        RMSE(p̂^n_{x,y}) [m]
    Scenario 1    0.42                       0.20
    Scenario 2    0.58                       0.29
    Scenario 3    1.16                       0.34
Figure 5.17: Attitude estimates when traversing a circular path and simulating a loss of GPS signals.
Figure 5.18: Attitude estimates when traversing a circular path and simulating bad GPS coverage.
Chapter 6
Concluding Remarks

6.1 Conclusions

A sensor fusion algorithm able to estimate the position and attitude of a quadcopter has been implemented and tested. The implementation was done using C++ and ROS and tested with the low-cost quadcopter AR.Drone 2.0. The performed experiments showed that the accuracy of the horizontal position estimates is good; the RMSE of the estimates is within 0.6 m for all cases except one, including when GPS deficiencies are present. The accuracy of the altitude estimates was not quantified, but the estimates were deemed accurate. The experiments also show that the attitude of the quadcopter is correctly estimated by the implemented estimation algorithm. However, when the quadcopter hovers at a point for several seconds, the estimated heading exhibits a slight drift (≈5°). A solution to this can be either to integrate a magnetometer into the system, as done in [2], or to use visual tags [8]. If visual tags were used, the system would also be usable in indoor areas. Furthermore, as also concluded by previous research, the complementary filter in feedback form has several advantages. First of all, it is a useful and reliable framework for sensor fusion of information from inertial sensors and aiding sensors. Secondly, the filter provides accurate information about position, velocity and orientation of a quadcopter at a high rate, thanks to the complementary characteristics of the sensors. Thirdly, since a dynamic model of the quadcopter is not needed, modifying the quadcopter or changing to another one is simplified. And last, since the filter also provides the uncertainty of the estimated navigation states, a control algorithm for autonomous navigation that adjusts the control according to the uncertainty can be applied.
6.2 Further work

Here follow some recommendations for future work based on the findings of this thesis.

• The accuracy of the system could be further investigated with more sophisticated methods, for example using a system consisting of IR cameras and IR tags on the quadcopter to determine a more accurate ground truth to compare the position estimates with. Provided that a fast and accurate system for determining the ground truth is available, the experiments would not be restricted to following known paths.

• In the experiments, it was observed that the heading had a slight drift due to integration of noisy measurements from the gyroscope. This might be a bigger problem when the quadcopter hovers for longer time periods. Therefore it is suggested to either integrate a magnetometer into the system or use an infrastructure of visual tags. The choice may depend on the future application of the system.

• If the system is to be used together with a closed-loop control system for autonomous flights, the real-time performance of the system should be tested. As stated in section 4.1, the data link over wifi results in delays between the AR.Drone and the connected computer. Possible solutions might be either to compensate for the delays using a prediction method or to perform the computations on-board the quadcopter. For the AR.Drone, the only possible solution is to use a prediction method.

• Furthermore, the use of the sonar sensors restricts the usage to flat ground; they need to be replaced or combined with an air-pressure sensor or another altitude-providing sensor for broader usage.
Bibliography

[1] Stergios Roumeliotis, Gaurav Sukhatme, George A Bekey, et al. "Circumventing dynamic modeling: Evaluation of the error-state Kalman filter applied to mobile robot localization". In: Robotics and Automation, 1999. Proceedings. 1999 IEEE International Conference on. Vol. 2. IEEE. 1999, pp. 1656-1663.
[2] Martin Barczyk and Alan F Lynch. "Integration of a triaxial magnetometer into a helicopter UAV GPS-aided INS". In: Aerospace and Electronic Systems, IEEE Transactions on 48.4 (2012), pp. 2947-2960.
[3] Xiaolin Hou et al. "Kalman filter based on error state variables in SINS+GPS navigation application". In: Information Science and Technology (ICIST), 2011 International Conference on. IEEE. 2011, pp. 770-773.
[4] Geng Qingbo, Li Nan, and Li Baokui. "The application of GPS/SINS integration based on Kalman filter". In: Control Conference (CCC), 2012 31st Chinese. IEEE. 2012, pp. 4607-4610.
[5] Yang PeiPei and Li Qing. "Application on Kalman filtering of SINS/GPS navigation system". In: Electronic Measurement & Instruments, 2009. ICEMI'09. 9th International Conference on. IEEE. 2009, pp. 3-354.
[6] Jakob Engel, Jürgen Sturm, and Daniel Cremers. "Scale-aware navigation of a low-cost quadrocopter with a monocular camera". In: Robotics and Autonomous Systems 62.11 (2014), pp. 1646-1656.
[7] Roland Brockers et al. "Towards autonomous navigation of miniature UAV". In: Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on. IEEE. 2014, pp. 645-651.
[8] Dave Zachariah. "Estimation for Sensor Fusion and Sparse Signal Processing". PhD thesis. KTH Royal Institute of Technology, 2013.
[9] Jay Farrell and Matthew Barth. The Global Positioning System and Inertial Navigation. McGraw-Hill, 1999.
[10] David Titterton and John Weston. Strapdown Inertial Navigation Technology. 2nd ed. The Institution of Engineering and Technology, 2004.
[11] Fredrik Gustafsson. Statistical Sensor Fusion. 2nd ed. Studentlitteratur AB, Lund, 2012.
[12] Antoni Burguera, Yolanda González, and Gabriel Oliver. "Sonar sensor models and their application to mobile robot localization". In: Sensors 9.12 (2009), pp. 10217-10243.
[13] Pierre-Jean Bristeau et al. "The Navigation and Control technology inside the AR.Drone micro UAV". In: Proceedings of the 18th IFAC World Congress. 2011, pp. 1477-1484.
[14] S. Piskorski et al. AR.Drone Developer Guide, SDK 2.0. Dec. 2012.
[15] Documentation ardrone_autonomy. June 2015. URL: http://ardrone-autonomy.readthedocs.org/en/latest/.
[16] ROS webpage. May 2015. URL: http://www.ros.org.
Appendix A
Skew-symmetric Matrices

A vector cross product, a × b, can also be represented by the product between a skew-symmetric matrix and a vector:

    a × b = [a]ₓ b = [b]ₓᵀ a

where the skew-symmetric matrix [a]ₓ is defined by

    [a]ₓ ≜ [  0    −a₃    a₂
              a₃    0    −a₁
             −a₂    a₁    0  ]
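In an implementation, the definition above maps directly to a small helper; a sketch assuming Eigen:

#include <Eigen/Dense>

// Skew-symmetric matrix [a]x such that a.cross(b) == skew(a) * b.
Eigen::Matrix3d skew(const Eigen::Vector3d& a) {
    Eigen::Matrix3d m;
    m <<     0.0, -a.z(),  a.y(),
           a.z(),    0.0, -a.x(),
          -a.y(),  a.x(),    0.0;
    return m;
}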
Appendix B
Transformations between Reference Frames

The ECEF coordinate system can be described with either rectangular coordinates or geodetic coordinates, as described in [9]. The rectangular system is an orthogonal system with its origin at the center of the earth, and it rotates with the earth. The geodetic coordinates define a point relative to the WGS-84 reference ellipsoid. Furthermore, the local navigation frame {n} is an orthogonal coordinate system tangential to the reference ellipsoid. The origin of this system is usually picked as the starting point of the navigation. A GPS receiver calculates its position with the help of pseudorange measurements between itself and a number of satellites. The position is given in the geodetic coordinates p_geo = [λ, φ, h]ᵀ: longitude, latitude and altitude respectively. To transform the given position into a position in the local navigation frame, two steps are performed. The first step is to express the given position p_geo in the rectangular coordinates of the ECEF frame. The second step is to transform the resulting rectangular coordinates to the local navigation frame.
B.1 Transformation from geodetic to rectangular coordinates in the ECEF frame

The calculation of a position expressed in rectangular ECEF coordinates (p^e = [x_e, y_e, z_e]ᵀ) from the same position expressed in geodetic ECEF coordinates (p_geo = [λ, φ, h]ᵀ) is done by considering how the reference ellipsoid is defined. The following information can be found in [9]. The equations to use are

    x_e = (N + h) cos(φ) cos(λ)            (B.1)
    y_e = (N + h) cos(φ) sin(λ)            (B.2)
    z_e = [N(1 − e²) + h] sin(φ)           (B.3)

where e is the so-called eccentricity of the reference ellipsoid and N is the length of the normal to the ellipsoid, i.e. the distance along a line perpendicular to the surface of the ellipsoid from the surface to the intersection with the z-axis of the rectangular coordinate system. N depends on the latitude and on the length of the ellipsoid's semimajor axis, a:

    e = 0.08181919                         (B.4)
    a = 6378137.0 m                        (B.5)
    N(φ) = a / √(1 − e² sin(φ)²)           (B.6)
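Equations (B.1)-(B.6) translate directly into code; a sketch in plain C++ with illustrative names:

#include <cmath>

struct Ecef { double x, y, z; };

// Geodetic coordinates (longitude lambda, latitude phi [rad], altitude h [m])
// to rectangular ECEF coordinates, equations (B.1)-(B.6).
Ecef geodeticToEcef(double lambda, double phi, double h) {
    const double e = 0.08181919;  // eccentricity of the reference ellipsoid
    const double a = 6378137.0;   // semimajor axis [m]
    const double N = a / std::sqrt(1.0 - e * e * std::sin(phi) * std::sin(phi));

    Ecef p;
    p.x = (N + h) * std::cos(phi) * std::cos(lambda);
    p.y = (N + h) * std::cos(phi) * std::sin(lambda);
    p.z = (N * (1.0 - e * e) + h) * std::sin(phi);
    return p;
}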
B.2 Transformation from rectangular coordinates in the ECEF frame to the local navigation frame

Once a position is given in the rectangular coordinates of the ECEF frame it can be transformed to the local navigation frame. The transformation considers the difference between the origin of the local navigation frame and the current position, and also the orientation of the local navigation frame relative to the ECEF frame. Let the origin of the local navigation frame expressed in the rectangular coordinates of the ECEF frame be p^{e,0} = [x_{e,0}, y_{e,0}, z_{e,0}]ᵀ. If the current position is p^e = [x_e, y_e, z_e]ᵀ, then the vector pointing from the origin of the {n} frame to the current position is Δx^e = p^e − p^{e,0}. Transforming this vector to the {n} frame yields the current position in the {n} frame. The transformation is constructed by two plane rotations: first one about the z-axis of the ECEF frame, then a second one around the new y-axis. The resulting rotation matrix R_e^n is given in [9]; with corrections for the definition of the {n} frame used here it becomes

    R_e^n = [ −sin(φ₀)cos(λ₀)   −sin(φ₀)sin(λ₀)   cos(φ₀)
               sin(λ₀)          −cos(λ₀)           0
               cos(φ₀)cos(λ₀)    cos(φ₀)sin(λ₀)   sin(φ₀) ]    (B.7)

where λ₀ and φ₀ are the longitude and latitude respectively of the origin of the {n} frame. Using the above definition of Δx^e and this rotation matrix, any position in the ECEF frame can be transformed to the {n} frame as

    p^n = R_e^n Δx^e.    (B.8)
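A sketch of (B.7) and (B.8) in code, assuming Eigen and the x-north, y-west, z-up sign convention used in the reconstruction above (the sign pattern is an assumption):

#include <cmath>
#include <Eigen/Dense>

// Rotation from rectangular ECEF coordinates to the local navigation
// frame {n}, equation (B.7), for an {n}-frame origin at longitude
// lambda0 and latitude phi0 [rad].
Eigen::Matrix3d Ren(double lambda0, double phi0) {
    const double sl = std::sin(lambda0), cl = std::cos(lambda0);
    const double sp = std::sin(phi0),    cp = std::cos(phi0);
    Eigen::Matrix3d R;
    R << -sp * cl, -sp * sl,  cp,
          sl,      -cl,       0.0,
          cp * cl,  cp * sl,  sp;
    return R;
}

// Equation (B.8): position in {n} from the ECEF position pe and the
// ECEF coordinates pe0 of the {n}-frame origin.
Eigen::Vector3d ecefToN(const Eigen::Vector3d& pe, const Eigen::Vector3d& pe0,
                        double lambda0, double phi0) {
    return Ren(lambda0, phi0) * (pe - pe0);
}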
Appendix C
Confidence ellipses

Let z be a 2-dimensional stochastic variable that is Gaussian distributed with mean m_z and covariance matrix Σ_z, z ∼ N(m_z, Σ_z). Then z lies within the ellipse given by

    (z − m_z)ᵀ Σ_z⁻¹ (z − m_z) = χ²₂(α)

with probability 1 − α, where χ²ₙ(α) denotes the α-quantile of the Chi-squared distribution with n degrees of freedom. The Kalman filter provides the state estimate x̂ and the corresponding error covariance matrix P = COV(x̂ − x). Since both the estimated state x̂ and the true state x are assumed to be normally distributed, the error x̂ − x is also normally distributed. Therefore the confidence ellipses can be calculated and used to represent the certainty of the filter's estimates.
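For plotting, the ellipse can be characterized by its semi-axes and orientation, obtained from an eigendecomposition of the relevant 2×2 covariance block; a sketch assuming Eigen, with χ²₂(0.05) = 5.991 for the 95% level:

#include <cmath>
#include <Eigen/Eigenvalues>

// Semi-axes and orientation of the 95% confidence ellipse of a
// 2D Gaussian with covariance S: points fall inside the ellipse
// (z - m)^T S^-1 (z - m) = 5.991 with probability 0.95.
struct Ellipse { double a, b, angle; };  // semi-axes, orientation [rad]

Ellipse confidenceEllipse95(const Eigen::Matrix2d& S) {
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> es(S);  // eigenvalues ascending
    const double k = std::sqrt(5.991);                     // 95% scaling
    const Eigen::Vector2d v = es.eigenvectors().col(1);    // major-axis direction
    return { k * std::sqrt(es.eigenvalues()(1)),           // semi-major axis
             k * std::sqrt(es.eigenvalues()(0)),           // semi-minor axis
             std::atan2(v.y(), v.x()) };
}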