Module 1 - Material - Algorithms
Sensor fusion
If you have multiple sensors, each containing different information, you can use sensor fusion to combine them into an even better sensor. The most important algorithm for doing this is the Kalman filter.
Most of you will have seen the Kalman filter before, and if so you can probably skip to the Octave/Matlab example below. If this is the first time you see it, or if it has been a while, go through the material more carefully so that you understand the main ideas and can follow what comes next.
Kalman filtering for newbies
- Understanding the KF (7 min)
- State Observers (8 min)
- Optimal State Estimation (7 min)
- Optimal State Estimation 2 (9 min)
- Explanation in pictures and math
- Or maybe you prefer this explanation
Sensor fusion example in Octave/Matlab
Let us go through an example of sensor fusion where we combine two sensor measurement signals y1 and y2 into an even better sensor signal using the Kalman filter. The following code works in both Octave and Matlab. In the Algorithms quiz later you will be asked to enter some results from the calculations, so pay attention.
Create a new directory somewhere suitable and start octave.
mkdir sensorfusion
cd sensorfusion
octave
Start by defining the true signal s that we are interested in estimating. Let's pick a signal that has slow, medium, and fast components. We will use a sample rate of 1 kHz and a signal length of 1 second.
dt = 1e-3; % sample period, i.e. 1 kHz sample rate
t = dt*(1:1000);
N = length(t);
s = t + 0.5*sin(2*pi*10*t) - 2*(t>0.5); % the true signal
The text after the percent sign % is a comment. You don't need to write that.
We will assume that two measurements of the true signal s (blue) are available. The measurement y1 (red) comes from a sensor with some slow sensor dynamics and some noise. On the plus side, this sensor has no bias. If the signal s were slow enough, we could use a low-pass filtered version of y1 with good results. We also have another sensor measurement y2 (green), which is fast and has less noise, but it suffers from an irritating, slowly varying bias offset. If the signal s were fast enough, we could use a high-pass filtered version of y2 with good results. That will not work well here, however, since s also has slow components that we would then lose. Here is the definition of y1 and y2.
a = 0.97; % Sensor speed, range [0,1), smaller a-value gives faster sensor
sigma_e1 = 0.1; % Noise level on red measurements
sigma_e2 = 0.01; % Noise level on green measurements
randn('state',0) % Initialise random number generator
e1 = sigma_e1*randn(1,N); % Measurement noise
e2 = sigma_e2*randn(1,N); % Measurement noise
bias = 0.3 + 2*sin(t); % Slowly varying bias
y1 = filter(1-a,[1 -a],s) + e1; % Red signal, slow but no bias
y2 = s + bias + e2; % Green signal, fast but with bias
The row with the filter command implements the first-order sensor dynamics yf(k) = a*yf(k−1) + (1−a)*s(k); the measurement noise e1 is then added on top. Type "help filter" for details.
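As a sanity check (not part of the original example), the one-line filter call can be compared against an explicit implementation of the same recursion:

```matlab
% Verify that filter(1-a,[1 -a],s) implements yf(k) = a*yf(k-1) + (1-a)*s(k)
a = 0.97;
s = randn(1,100);                    % any test signal will do
yf_filter = filter(1-a, [1 -a], s);  % one-line version
yf_loop = zeros(1,100);
yf_loop(1) = (1-a)*s(1);             % filter() assumes a zero initial state
for k = 2:100
  yf_loop(k) = a*yf_loop(k-1) + (1-a)*s(k);
end
max(abs(yf_filter - yf_loop))        % difference at numerical precision
```

The two versions agree to machine precision, which confirms the coefficient vectors b = 1-a and a = [1 -a] passed to filter.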
Let's plot the signals.
figure(1)
plot(t,s,'b',t,y1,'r',t,y2,'g')
hold on
The result should look something like this
Neither the red (y1) nor the green (y2) sensor measurement gives a good estimate of the blue true signal (s).
As we will now see, they can however be combined (fused) into a good sensor signal with the Kalman filter. The algorithm will combine a low-pass filtered (slow) version of the red signal with a high-pass filtered (fast) version of the green signal. This happens automatically, with good results, as long as the statistical properties of the different signals are properly described in the language the Kalman filter understands.
The mathematical description that we will use for our signals is
x1(k) = a*x1(k−1) + (1−a)*x3(k−1)    (sensor dynamics)
x2(k) = x2(k−1) + w2(k)              (random walk model of the bias)
x3(k) = x3(k−1) + w3(k)              (random walk model of the signal s)
y1(k) = x1(k) + e1(k)                (slow measurement without bias)
y2(k) = x2(k) + x3(k) + e2(k)        (fast measurement with bias)
which is summarized as
x(k) = F*x(k−1) + w(k),   cov(w) = Q
y(k) = H*x(k) + e(k),     cov(e) = R
In Octave/Matlab this becomes
sigma_bias = 0.02; % Estimate of bias change rate, found by some trial and error
sigma_signal = 0.06; % Estimate of signal change rate, same
Q = blkdiag(0,sigma_bias^2,sigma_signal^2); % process noise covariance matrix
R = blkdiag(sigma_e1^2,sigma_e2^2); % measurement noise covariance matrix
F = [a 0 1-a; 0 1 0; 0 0 1];
H1 = [1 0 0]; % slow sensor
H2 = [0 1 1]; % fast sensor with bias
H = [H1;H2];
y = [y1;y2];
%ind = 1; % only first measurement available
%ind = 2; % only second measurement available
ind = 1:2; % both measurements available
y = y(ind,:); % pick out relevant parts
H = H(ind,:);
R = R(ind,ind);
The indexing is there to prepare for experiments with different sensor combinations.
The Kalman filter itself is the following short code snippet
% Initialise Kalman filter
P = 0*eye(3); % initial error covariance; influences initial convergence
xhat = zeros(3,N); % storage for state estimates
% Run Kalman filter
for k = 1:N
% time update
if k>1, xhat(:,k) = F*xhat(:,k-1); end
P = F*P*F' + Q;
% measurement update
K = (P*H')/(H*P*H'+R);
xhat(:,k) = xhat(:,k) + K*(y(:,k)-H*xhat(:,k));
P = P - K*H*P;
end
shat = xhat(3,:); % the sensor fusion estimate of s(t)
In the loop, the prediction x̂(k|k−1) is first calculated, i.e. the best estimate of the full state vector x(k) obtainable using the measurements y(1), …, y(k−1). This prediction is then updated, using the information available in the new measurement y(k), producing the estimate x̂(k|k). The first calculation of P gives the covariance matrix of the prediction error x(k) − x̂(k|k−1); the subsequent update of P gives the covariance matrix of the estimation error x(k) − x̂(k|k). Since we model s = x3, our estimate of s is given by the third component of x̂(k|k).
To visualize the result and calculate the rms error we write
figure(1)
plot(t,shat,'k')
legend('True signal', 'y1','y2','Kalman filter')
rmserror = sqrt(mean((shat-s).^2))
The result in Octave should be something like the following figure and the rms-value should be 0.073866. (There will be some differences between Octave and Matlab, since they use different random number generators.)
The Kalman filter estimate is given by the black line, which can be seen to follow the blue line quite well, hence giving a good combination of the information available in the red and green signals.
Now close the plots and rerun the filter with only measurement y1 available to the filter. This is done by
close all
ind = 1;
y = y(ind,:);
H = H(ind,:);
R = R(ind,ind);
followed by the Kalman filter code above. Plotting s, y1 and shat should produce a figure similar to the following
The black curve is the estimate produced by the Kalman filter using only the red measurement y1. It can be seen that the estimate is now given by a high-pass filtered version of y1. Not using the signal y2 has resulted in a worse estimate, illustrated by the higher rms value.
Q. What is the new rms value? You will need this information when answering the Algorithms quiz later.
The code we have used above is included in the file fusionexample.m.
Extra: Note that we have provided the Kalman filter with a crude inexact model of the signal s. We can therefore not claim that the Kalman filter produces statistically optimal results. The code contains support for experimenting with optimizing parameters sigma_bias and sigma_signal. Change back to ind=1:2 so both y1 and y2 are used in the filter, and then try to improve the resulting filter performance by changing sigma_bias and sigma_signal.
The code also contains support for plotting the frequency response of the Kalman filter (you need to uncomment some lines), i.e. how much of the two signals y1 and y2 is used at different frequencies. Investigate how the filter's frequency characteristics change when you vary the different parameters. You will, for example, notice that more noise on a measurement automatically leads to the Kalman filter making less use of that measurement.
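For reference, here is a hedged sketch of how such a frequency response can be computed. This is an independent reconstruction, not the code in fusionexample.m: it iterates the Riccati recursion to the stationary gain K and then evaluates the transfer functions of the resulting time-invariant filter.

```matlab
% Sketch: frequency response of the converged Kalman filter from each
% measurement to the estimate shat = xhat(3). Model matrices as in the text.
a = 0.97; sigma_e1 = 0.1; sigma_e2 = 0.01;
sigma_bias = 0.02; sigma_signal = 0.06; dt = 1e-3;
F = [a 0 1-a; 0 1 0; 0 0 1];
H = [1 0 0; 0 1 1];
Q = blkdiag(0, sigma_bias^2, sigma_signal^2);
R = blkdiag(sigma_e1^2, sigma_e2^2);
% Iterate the Riccati recursion until the gain K reaches steady state
P = zeros(3);
for k = 1:2000
  P = F*P*F' + Q;
  K = (P*H')/(H*P*H' + R);
  P = P - K*H*P;
end
% Stationary filter: xhat(k) = A*xhat(k-1) + K*y(k), A = (I - K*H)*F,
% so the transfer function row is G(z) = [0 0 1] * z*(z*I - A)^(-1) * K
A = (eye(3) - K*H)*F;
f = logspace(-1, log10(1/(2*dt)), 200);  % 0.1 Hz up to Nyquist
G = zeros(2, length(f));                 % one row per measurement channel
for i = 1:length(f)
  z = exp(1j*2*pi*f(i)*dt);
  G(:,i) = ([0 0 1] * ((z*eye(3) - A) \ (z*K))).';
end
loglog(f, abs(G))
xlabel('Frequency [Hz]'), ylabel('Magnitude')
legend('from y1', 'from y2')
```

The plot should show the filter leaning on y1 at low frequencies and on y2 at high frequencies, which is exactly the low-pass/high-pass combination described above.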
Particle filters
The Kalman filter described above is designed for situations with linear relations between signals and Gaussian-distributed noise. It also works well in nearby situations, and there are variants of the KF designed to extend its usefulness further. These go under names such as the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), and similar.
However, if the probability densities of the signals we are estimating are multi-modal, typically in situations where there are several possible, quite distinct, hypotheses about what has happened, these methods do not give satisfactory performance. We should then use more powerful methods of nonlinear estimation, which unfortunately are typically also more computationally costly. A popular alternative is the particle filter.
To get some common knowledge about PFs, go through the survey material below.
- Main idea of the PF explained on a 1-D navigation example (5 min)
- Tracking example (3 min)
- Indoor localisation example
Extra: If you want to be able to implement a particle filter yourself, you should also go through the nice material provided by Andreas Svensson in Uppsala.
- An introduction to particle filters
- The Matlab code PF.m for the 1-D navigation example above
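To fix the main idea, here is a minimal bootstrap particle filter sketch for a toy 1-D random-walk model. The model and all parameters are illustrative assumptions; this is not the navigation example from the links above.

```matlab
% Bootstrap particle filter on the toy model
%   x(k) = x(k-1) + w(k),   y(k) = x(k) + e(k)
randn('state',0); rand('state',0);
N  = 200;                          % time steps
Np = 500;                          % number of particles
sigma_w = 0.1; sigma_e = 0.5;
x = cumsum(sigma_w*randn(1,N));    % true trajectory
y = x + sigma_e*randn(1,N);        % noisy measurements
p = randn(1,Np);                   % initial particle cloud
xhat = zeros(1,N);
for k = 1:N
  p = p + sigma_w*randn(1,Np);               % time update: propagate particles
  w = exp(-0.5*((y(k)-p)/sigma_e).^2);       % measurement update: likelihoods
  w = w/sum(w);                              % normalise the weights
  xhat(k) = sum(w.*p);                       % weighted-mean estimate
  % systematic resampling: draw Np particles in proportion to their weights
  c = cumsum(w); c(end) = 1;
  u = (rand + (0:Np-1))/Np;
  idx = zeros(1,Np); j = 1;
  for i = 1:Np
    while c(j) < u(i), j = j + 1; end
    idx(i) = j;
  end
  p = p(idx);
end
sqrt(mean((xhat-x).^2))            % rms error; should beat the raw measurement
```

The structure mirrors the Kalman filter loop above (time update, measurement update), but the posterior is represented by a weighted particle cloud instead of a mean and covariance, which is what lets it handle multi-modal densities.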
RANSAC
Random sample consensus (RANSAC) is an iterative method to estimate parameters of a mathematical model from a set of observed data that contains outliers, when outliers are to be given no influence on the values of the estimates.
Go through this presentation
- RANSAC
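The RANSAC loop is short enough to sketch directly. The following toy example (all data and parameters are illustrative assumptions) fits a line y = k*x + m to data where roughly 30% of the points are gross outliers:

```matlab
% RANSAC sketch: line fitting in the presence of outliers
randn('state',0); rand('state',0);
n = 100;
x = 10*rand(1,n);
y = 2*x + 1 + 0.1*randn(1,n);        % inliers around the line y = 2x + 1
out = rand(1,n) < 0.3;               % mark ~30% of the points as outliers
y(out) = 20*rand(1,sum(out));        % replace them with garbage
thr = 0.5;                           % inlier distance threshold
best_inl = 0; best_km = [0 0];
for it = 1:200
  idx = randperm(n);                 % 1. pick a minimal sample (2 points)
  i1 = idx(1); i2 = idx(2);
  if x(i1) == x(i2), continue; end
  k = (y(i2)-y(i1))/(x(i2)-x(i1));   % 2. fit a candidate model to the sample
  m = y(i1) - k*x(i1);
  inl = abs(y - (k*x + m)) < thr;    % 3. count points that agree (consensus)
  if sum(inl) > best_inl             % 4. keep the model with most inliers
    best_inl = sum(inl); best_km = [k m];
  end
end
% 5. refit with least squares on the inliers of the best model
inl = abs(y - (best_km(1)*x + best_km(2))) < thr;
A = [x(inl)' ones(sum(inl),1)];
best_km = (A \ y(inl)')';            % should be close to [2 1]
```

Because each candidate model is fitted only to a minimal sample and judged by its consensus set, the outliers get no influence on the final estimate, which is the defining property stated above.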
Simultaneous Localisation and Mapping (SLAM)
When you need to localise yourself and build a map at the same time, you need to use a SLAM algorithm. This is a popular area of research in autonomous systems. Here is some survey material.
- Introduction to SLAM by Sebastian Thrun (2 min)
- Introduction to Graph SLAM by Sebastian Thrun (4 min)
(Extra) More details for the interested
- Visual odometry tutorial by Davide Scaramuzza (SfM -> SLAM -> VO)
- Additional material
- SLAM tutorials
  - Hugh Durrant-Whyte & Tim Bailey, "Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms" (2006) (pdf)
  - Tim Bailey & Hugh Durrant-Whyte, "Simultaneous Localisation and Mapping (SLAM): Part II State of the Art" (2006) (pdf)
  - Grisetti et al., "A Tutorial on Graph-Based SLAM" (pdf)
- Visual odometry tutorial
ORB_SLAM2
On your external disk / virtual machine you have an example of a state-of-the-art vision-based SLAM system, ORB_SLAM2, that you can easily play with using a camera.
In one window run
roscore
In a second window start the camera service
rosparam set video/device_id 1
rosrun cv_camera cv_camera_node
Now we need to tell ORB SLAM about the camera
cp ~/software/ORB_SLAM2/Examples/ROS/ORB_SLAM2/Asus.yaml MyCam.yaml
Edit MyCam.yaml with the parameters for the camera calibration
ORB_SLAM expects the image to be published in /camera/image_raw so we need to republish it
rosrun image_transport republish raw in:=/cv_camera/image_raw out:=/camera/image_raw
Now run ORB SLAM
~/software/ORB_SLAM2/Examples/ROS/ORB_SLAM2/Mono ~/software/ORB_SLAM2/Vocabulary/ORBvoc.txt MyCam.yaml
More Bonus material on Algorithms
Radio-based SLAM, where a map of the local radio environment is built and high-accuracy phase-based positioning is achieved, is an exciting recent area of research (an opinion totally unrelated to the fact that the author of this paragraph was a supervisor of the early work presented in the following thesis). For an introduction to the ideas and an analysis of the fundamental limits of performance, see
More bonus material
- IMU calibration
- How to model IMU sensor imperfections (bias, random walk, etc.)
- More sensor fusion
- Information inequality and the Cramér-Rao Lower Bound
- Particle filters, algorithmic details (extra material) [33 min]
- Matlab code for Allan variance plotting
- ICP (Iterative Closest Point): a video that gives the idea of how two sets of points can be registered/aligned
- Slides from Burgard et al.
- Video showing how the position of a robot is estimated using ICP in a frame-to-frame fashion (dead reckoning)