Module 1 - Material - Algorithms

Sensor fusion

If you have multiple sensors, each containing different information, you can use sensor fusion to combine them into an even better sensor. The most important algorithm for doing this is the Kalman filter.

Most of you will have seen the Kalman filter before, and if so you can probably skip to the Octave/Matlab example below. If this is your first encounter with it, or if it has been a while, go through it more carefully so that you understand the main ideas and can follow what comes next.

Kalman filtering for newbies

Sensor fusion example in Octave/Matlab

Let us go through an example of sensor fusion where we combine two sensor measurement signals y1 and y2 into an even better sensor signal using the Kalman filter. The following code works in both Octave and Matlab. In the Algorithms quiz later you will be asked to enter some results from the calculations, so pay attention.

Create a new directory somewhere suitable and start Octave.

mkdir sensorfusion
cd sensorfusion
octave

Start by defining the true signal s that we want to estimate. Let's pick a signal with slow, medium and fast components. We will use a sample rate of 1 kHz and a signal length of 1 second.

dt = 1e-3;                              % sample period (1 kHz sample rate)
t = dt*(1:1000);
N = length(t);
s = t + 0.5*sin(2*pi*10*t) - 2*(t>0.5); % the true signal

The text after the percent sign % is a comment; you don't need to type it.

We will assume that two measurements of the true signal s (blue) are available. The measurement y1 (red) comes from a sensor with some slow sensor dynamics and some noise; on the plus side, this sensor has no bias. If the signal s were slow enough, we could use a low-pass filtered version of y1 with good results. We also have a second measurement y2 (green), which is fast and has less noise, but suffers from an irritating, slowly varying bias offset. If the signal s were fast enough, we could use a high-pass filtered version of y2 with good results. That will not work well here, however, since s also has slow components that we would then lose. Here are the definitions of y1 and y2.

a = 0.97;                               % Sensor speed, range [0,1); smaller a gives a faster sensor
sigma_e1 = 0.1;                         % Noise level on red measurements
sigma_e2 = 0.01;                        % Noise level on green measurements
randn('state',0)                        % Initialise random number generator
e1 = sigma_e1*randn(1,N);               % Measurement noise
e2 = sigma_e2*randn(1,N);               % Measurement noise
bias = 0.3 + 2*sin(t);                  % Slowly varying bias
y1 = filter(1-a,[1 -a],s) + e1;         % Red signal, slow, no bias
y2 = s + bias + e2;                     % Green signal, fast but with bias

The line with the filter command implements the first-order dynamics y1(k) = a·y1(k-1) + (1-a)·s(k) + e1(k). Type "help filter" for details.
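If the filter call looks opaque, you can convince yourself that it implements exactly this recursion by comparing it against an explicit loop (a small sketch; the test signal here is arbitrary):

```matlab
% Check that filter(1-a,[1 -a],s) implements y(k) = a*y(k-1) + (1-a)*s(k)
a = 0.97;
s = randn(1,100);              % any test signal will do
yf = filter(1-a, [1 -a], s);   % one-line version
yr = zeros(1,100);
yr(1) = (1-a)*s(1);            % recursion with zero initial state
for k = 2:100
    yr(k) = a*yr(k-1) + (1-a)*s(k);
end
max(abs(yf - yr))              % differs only by rounding error
```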

Let's plot the signals.

figure(1)
plot(t,s,'b',t,y1,'r',t,y2,'g')
hold on

The result should look something like this

Signals s, y1 and y2

Neither the red (y1) nor the green (y2) sensor measurement gives a good estimate of the blue true signal (s).

As we will now see, they can however be combined (fused) into a good sensor signal with the Kalman filter. The algorithm will combine a low-pass filtered (slow) version of the red signal with a high-pass filtered (fast) version of the green signal. This happens automatically, with good results, as long as the statistical properties of the different signals are properly described in the language understood by the Kalman filter.
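To see what such a frequency split can look like, here is a hand-tuned complementary filter sketch: a low-pass of y1 plus the matching high-pass of y2, so that the two branches together pass all frequencies. The cutoff parameter b below is an illustrative choice, not part of the Kalman example, and the signal definitions from above are repeated so the snippet runs on its own:

```matlab
% Hand-tuned complementary filter (illustrative; b is chosen by hand)
dt = 1e-3; t = dt*(1:1000); N = length(t);
s = t + 0.5*sin(2*pi*10*t) - 2*(t>0.5);       % true signal as above
a = 0.97; randn('state',0)
y1 = filter(1-a,[1 -a],s) + 0.1*randn(1,N);   % slow, unbiased sensor
y2 = s + 0.3 + 2*sin(t) + 0.01*randn(1,N);    % fast, biased sensor

b = 0.99;                             % complementary-filter cutoff, hand tuned
y1_lp = filter(1-b, [1 -b], y1);      % low-pass the unbiased sensor
y2_hp = y2 - filter(1-b, [1 -b], y2); % high-pass the biased sensor
s_comp = y1_lp + y2_hp;               % crude fused estimate
% (imperfect: y1's own sensor lag is not compensated here)
sqrt(mean((s_comp - s).^2))           % compare with the Kalman rms later
```

The Kalman filter below performs the same kind of frequency split, but derives the weighting automatically from the signal models instead of requiring hand tuning.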

The mathematical description that we will use for our signals is

x1(k) = a·x1(k-1) + (1-a)·x3(k-1)        (sensor dynamics)
x2(k) = x2(k-1) + w2(k)                  (random walk model of the bias)
x3(k) = x3(k-1) + w3(k)                  (random walk model of the signal s)
y1(k) = x1(k) + e1(k)                    (slow measurement without bias)
y2(k) = x2(k) + x3(k) + e2(k)            (fast measurement with bias)

which is summarized as

x(k) = F x(k-1) + w(k),    cov(w) = Q
y(k) = H x(k) + e(k),      cov(e) = R

In Octave/Matlab this becomes

sigma_bias = 0.02;                   % Estimate of bias change rate, found by some trial and error
sigma_signal = 0.06;                 % Estimate of signal change rate, same
Q = blkdiag(0,sigma_bias^2,sigma_signal^2); % process noise covariance matrix
R = blkdiag(sigma_e1^2,sigma_e2^2);  % measurement noise covariance matrix
F = [a 0 1-a; 0 1 0; 0 0 1];
H1 = [1 0 0];        % slow sensor
H2 = [0 1 1];        % fast sensor with bias    
H = [H1;H2];
y = [y1;y2];
%ind = 1;           % only first measurement available
%ind = 2;           % only second measurement available
ind = 1:2;         % both measurements available
y = y(ind,:);      % pick out relevant parts
H = H(ind,:);
R = R(ind,ind);

The indexing is there to prepare for experiments with different sensor combinations.

The famous Kalman filter consists of the following code snippet

% Initialise Kalman filter
P = 0*eye(3);       % influences initial convergence
xhat = zeros(3,N);  % storage for state estimates
% Run Kalman filter
for k = 1:N
    % time update
    if k>1, xhat(:,k) = F*xhat(:,k-1); end
    P = F*P*F' + Q;
    % measurement update
    K = (P*H')/(H*P*H'+R);
    xhat(:,k) = xhat(:,k) + K*(y(:,k)-H*xhat(:,k));
    P = P - K*H*P;
end
shat = xhat(3,:); % the sensor fusion estimate of s(t)

In the loop, the estimate x̂(k|k-1) is first calculated, i.e. the best prediction of the full state vector x(k) obtainable using the measurements y(1), ..., y(k-1). This estimate is then updated using the information in the new measurement y(k), producing the estimate x̂(k|k). The first calculation of P gives the covariance matrix of the error x(k) - x̂(k|k-1); the updated P gives the error covariance matrix of x(k) - x̂(k|k). Since we assume s = x3, our estimate of s is given by the third component of x̂(k|k).

To visualize the result and calculate the rms error we write

figure(1)
plot(t,shat,'k')
legend('True signal', 'y1','y2','Kalman filter')
rmserror = sqrt(mean((shat-s).^2))

The result in Octave should look something like the following figure, and the rms value should be 0.073866. (There will be some differences between Octave and Matlab, since they use different random number generators.)

sensor fusion result

The Kalman filter estimate is given by the black line, which can be seen to follow the blue line quite well, hence giving a good combination of the information available in the red and green signals.

Now close the plots and rerun the filter with only measurement y1 available to the filter. This is done by

close all
ind = 1;
y = y(ind,:);     
H = H(ind,:);
R = R(ind,ind);

followed by the Kalman filter code above. Plotting s, y1 and shat should produce a figure similar to the following

estimate from y1

The black curve is the estimate produced by the Kalman filter using only the red measurement y1. The estimate is now a high-pass filtered version of y1. Not using the signal y2 results in a worse estimate, as reflected by the higher rms value.

Q. What is the new rms value? You will need it when answering the Algorithms quiz later.

The code we have used above is included in the file fusionexample.m.

Extra: Note that we have provided the Kalman filter with a crude, inexact model of the signal s. We can therefore not claim that the Kalman filter produces statistically optimal results. The code contains support for experimenting with the parameters sigma_bias and sigma_signal. Change back to ind = 1:2 so that both y1 and y2 are used in the filter, and then try to improve the filter performance by tuning sigma_bias and sigma_signal.

The code also contains support for plotting the frequency response of the Kalman filter (you need to uncomment some lines), i.e. how much of each of the two signals y1 and y2 is used at different frequencies. Investigate how the filter's frequency characteristics change as you vary the parameters. You will, for example, notice that more noise on a measurement automatically leads the Kalman filter to rely less on that measurement.

 

Particle filters

The Kalman filter described above is designed for situations with linear relations between signals and Gaussian-distributed noise. It also works well in nearly linear situations, and there are variants of the KF designed to extend its usefulness further, under names such as the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF).

However, if the probability densities of the signals we are estimating are multi-modal, which typically happens when there are several possible, quite distinct hypotheses about what has happened, these methods do not give satisfactory performance. We should then use more powerful methods of nonlinear estimation, which unfortunately are typically also more computationally costly. A popular alternative is the particle filter.

To get some general knowledge about particle filters (PFs), go through the survey material below.

Extra: If you want to be able to implement a particle filter yourself, you should also go through the nice material provided by Andreas Svensson in Uppsala.
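To give a flavour of how compact the core algorithm is, here is a minimal bootstrap particle filter sketch. The scalar model below (a random-walk state observed through a squared measurement) and all its parameters are purely illustrative and not taken from the course material; the squared measurement makes the posterior bimodal, which is exactly the situation where a Kalman filter fails:

```matlab
% Bootstrap particle filter for the illustrative model
%   x(k) = x(k-1) + w(k),       w ~ N(0, sigma_w^2)
%   y(k) = x(k)^2/20 + e(k),    e ~ N(0, sigma_e^2)
randn('state',0); rand('state',0);
N = 100; Np = 1000;
sigma_w = 0.5; sigma_e = 0.1;
x = cumsum(sigma_w*randn(1,N)) + 1;              % simulate the true state
y = x.^2/20 + sigma_e*randn(1,N);                % and its measurements

p = 1 + randn(1,Np);                             % initial particle cloud
xhat = zeros(1,N);
for k = 1:N
    p = p + sigma_w*randn(1,Np);                 % propagate each particle
    w = exp(-(y(k) - p.^2/20).^2/(2*sigma_e^2)); % measurement likelihood
    w = w/sum(w);                                % normalised weights
    xhat(k) = sum(w.*p);                         % weighted-mean point estimate
    % systematic resampling
    u = ((0:Np-1) + rand)/Np;  c = cumsum(w);
    idx = zeros(1,Np); j = 1;
    for i = 1:Np
        while c(j) < u(i) && j < Np, j = j + 1; end
        idx(i) = j;
    end
    p = p(idx);
end
% Note: with a bimodal posterior (x and -x explain y equally well), the
% weighted mean can be a poor summary; plotting the particle cloud at a
% fixed k shows the two modes explicitly.
```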

 

RANSAC

Random sample consensus (RANSAC) is an iterative method for estimating the parameters of a mathematical model from a set of observed data that contains outliers, where the outliers are to be given no influence on the estimates.
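Here is a minimal RANSAC sketch for the classic example of fitting a line to data with gross outliers (the threshold, iteration count and noise levels below are all illustrative choices):

```matlab
% RANSAC line fit y = k*x + m in the presence of ~30% gross outliers
randn('state',0); rand('state',0);
x = 1:100;
y = 2*x + 1 + randn(1,100);             % inliers around the true line y = 2x+1
out = rand(1,100) < 0.3;                % mark ~30% of the points as outliers
y(out) = 200*rand(1,sum(out));          % replace them with garbage values

thr = 3;                                % inlier distance threshold
best_n = 0; best_km = [0 0];
for iter = 1:200
    i = randperm(100, 2);               % minimal sample: 2 points define a line
    k = (y(i(2)) - y(i(1)))/(x(i(2)) - x(i(1)));
    m = y(i(1)) - k*x(i(1));
    inl = abs(y - (k*x + m)) < thr;     % consensus set of this hypothesis
    if sum(inl) > best_n                % keep the hypothesis with most support
        best_n = sum(inl);
        best_km = polyfit(x(inl), y(inl), 1); % least-squares refit on inliers
    end
end
best_km                                 % close to the true [2 1]
```

The same hypothesise-and-verify loop generalises to any model with a minimal sample size, for example homographies estimated from four point correspondences.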

Go through these presentations

 

Simultaneous Localisation and Mapping (SLAM)

When you need to localise yourself and build a map at the same time, you need to use a SLAM algorithm. This is a popular research area in autonomous systems. Here is some survey material.

 

(Extra) More details for the interested

ORB_SLAM2

On your external disk / virtual machine you have an example of a state-of-the-art vision-based SLAM system, ORB_SLAM2, that you can easily play with using a camera.

In one window run

roscore

In a second window start the camera service

rosparam set video/device_id 1
rosrun cv_camera cv_camera_node

Now we need to tell ORB SLAM about the camera

cp ~/software/ORB_SLAM2/Examples/ROS/ORB_SLAM2/Asus.yaml MyCam.yaml

Edit MyCam.yaml with the parameters for the camera calibration

ORB_SLAM expects the image to be published on /camera/image_raw, so we need to republish it

rosrun image_transport republish raw in:=/cv_camera/image_raw out:=/camera/image_raw

Now run ORB SLAM

~/software/ORB_SLAM2/Examples/ROS/ORB_SLAM2/Mono ~/software/ORB_SLAM2/Vocabulary/ORBvoc.txt MyCam.yaml

 

More Bonus material on Algorithms

Radio-based SLAM, where a map of the local radio environment is built and high-accuracy phase-based positioning is achieved, is an exciting recent area of research (an opinion totally unrelated to the fact that the author of this paragraph supervised the early work presented in the following thesis). For an introduction to the ideas and an analysis of the fundamental performance limits, see

More bonus material