Localization

Objectives

  • Define the localization problem
  • Describe the two main setups, "infrastructure" vs "uninstrumented"
  • Describe different use cases, "global localization", "tracking", "kidnapping"
  • Basic understanding of the Kalman filter and the particle filter (no proofs, no need to be able to implement) including strengths and weaknesses.
  • Describe how GPS works and the limitations (there must be some otherwise there would be no point in talking about localisation...)

 

Material to study

I suggest that you start from this Canvas page and refer to SHoR when/if needed. 

The Springer Handbook (SHoR), Chapter 35 (25 in 1st edition) describes the Kalman and particle filter and GPS is described in section 29.5 (20.5 in 1st edition).

 

     

    Introduction

    Localisation is the process of estimating the position of the robot. The localisation process typically contains two steps. In the first, you make use of a motion model to predict how the robot moves based on either control inputs or encoder readings (odometry). In the second step, you make corrections to your position estimate based on information from exteroceptive sensors. Those of you who sailed before there was GPS know that when you sailed over open water you needed to pay attention to your heading and speed and keep track of your position that way. Then, when you came close to shore on the other side, you could use landmarks there (for example lighthouses) to correct your position.

    Although we will focus entirely on localisation on this page, everything we describe can be generalised to estimation of unknown variables in a dynamic system. 

     

    Why bother, there is GPS!!?!?!

    Yes, there is GPS, and it has enabled applications which we could not even have imagined before. Many of them are connected to the location services that your phone provides. It is the workhorse when we try to find our way in a new city using navigation software such as Google Maps. 

    There are several limitations of GPS though, and localisation is therefore not a solved problem. Some of the limitations are listed below.

    • At least four satellites need to be in line of sight.
      • This rules out almost all indoor applications and places like tunnels and mines. 
      • It also makes urban settings a bit challenging because of the tall buildings.
    • It has limited accuracy. Normal GPS is accurate to about 5-10 m, and the error varies with many factors. This is still fantastic in many, but not all, applications. Imagine driving an autonomous car on this alone: you would not know which lane you were in.
    • The update rate is relatively limited (a few Hz).
    • It can be turned off, and then everything that relies only on it would fail.
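    On the first point, why at least four satellites: each pseudorange measurement involves both the receiver position and an unknown receiver clock offset, so 3-D positioning has four unknowns (x, y, z and the clock term) and needs four measurements. The sketch below shows the 2-D analogue (three unknowns, three beacons) solved with Gauss-Newton; the beacon coordinates and the true state are invented for illustration.

```python
import numpy as np

def solve_position(beacons, pseudoranges, x0, iters=10):
    """Gauss-Newton for 2-D position (x, y) plus a clock-offset distance b.

    Each pseudorange is the geometric range plus b, so three beacons give
    the three equations needed for the three unknowns (the 3-D GPS
    analogue needs four satellites for x, y, z, b)."""
    x = np.array(x0, dtype=float)                 # state: [x, y, b]
    for _ in range(iters):
        diff = x[:2] - beacons                    # (n, 2) offsets to beacons
        d = np.linalg.norm(diff, axis=1)          # geometric ranges
        r = pseudoranges - (d + x[2])             # residuals z - h(x)
        # Jacobian of the predicted pseudorange w.r.t. (x, y, b)
        J = np.column_stack([diff / d[:, None], np.ones(len(beacons))])
        x += np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return x

# three beacons at known (made-up) 2-D positions
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
truth = np.array([30.0, 40.0, 5.0])               # true x, y and clock bias
z = np.linalg.norm(beacons - truth[:2], axis=1) + truth[2]

est = solve_position(beacons, z, x0=[50.0, 50.0, 0.0])
print(est)                                        # close to [30, 40, 5]
```

Real GPS solves the same kind of nonlinear least-squares problem, just in 3-D and with many additional error sources (atmospheric delays, multipath, satellite clocks).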

     

    Problem statement

    The problem of localisation is that of estimating the pose, i.e. position and orientation, given measurements. If we only have access to proprioceptive sensor information we can only do dead reckoning (compare the sailing example above). The position estimate from a localisation system relying on dead reckoning will drift and the error will increase without bounds. With exteroceptive sensors we can relate the measurements to a known model (or map) of the environment, get information about the absolute position, and in this way bound the error.
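    The unbounded drift of dead reckoning can be illustrated with a small simulation (the noise level is an arbitrary choice): each step adds independent odometry noise, so the RMS position error grows with the square root of the number of steps.

```python
import math
import random

random.seed(42)

def drift_rms(n_steps, n_trials=300, sigma=0.05):
    """RMS dead-reckoning error after n_steps of noisy odometry,
    averaged over n_trials independent runs."""
    sq = 0.0
    for _ in range(n_trials):
        err = 0.0
        for _ in range(n_steps):
            err += random.gauss(0.0, sigma)   # each step adds fresh noise
        sq += err * err
    return math.sqrt(sq / n_trials)

# error grows roughly like sigma * sqrt(steps): 10x more steps,
# about sqrt(10) ~ 3.2x more error, with no bound in sight
print(drift_rms(10), drift_rms(1000))
```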

    Formalising the problem

    Introduce the following notation

    • xk is the pose at time k (this is the state of the system)
    • uk is the odometric information, control signal or something else that drives the dead reckoning (aka prediction)
    • zk is a measurement from the environment M.

    One typically assumes that the relations between these variables are captured by the Bayesian network (BN) below.

    BN_Localization_incl_Map.png

    A BN is an example of a probabilistic graphical model (PGM). Remember that the edges in a Bayesian network denote that one variable (which the edge points away from) has direct influence over another variable (that the edge points at). This means that the next state, xk+1, depends on the previous state, xk, and the control signal, uk+1. The graph also shows that we assume that the measurements, zk, are fully determined by the state at that time, xk, and the model of the environment. Also remember that variables that are known/measured are drawn shaded. That is, in the localisation problem the only variables that are considered unknown are those connected to the pose of the robot.

    In localisation literature this dependency on the model is often not made explicit, and we therefore typically consider the graph shown below to describe the problem. 

    BN_Localization.png

     

    The localisation problem is defined as the problem of estimating the pose, xk, using all the control inputs up until that time, U1:k={u1,u2,u3,...,uk}, and all the measurements, Z1:k={z1,z2,z3,...,zk}. Almost all approaches to localisation are formulated in a Bayesian framework, and we thus try to estimate p(xk | U1:k, Z1:k ).
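    Under the independence assumptions encoded in the Bayesian network above, this posterior can be computed recursively. As a sketch, the recursion alternates between integrating over the motion model and weighting by the measurement likelihood:

```latex
\begin{align*}
\text{predict:}\quad & p(x_k \mid U_{1:k}, Z_{1:k-1})
  = \int p(x_k \mid x_{k-1}, u_k)\, p(x_{k-1} \mid U_{1:k-1}, Z_{1:k-1})\,\mathrm{d}x_{k-1} \\
\text{update:}\quad & p(x_k \mid U_{1:k}, Z_{1:k})
  \propto p(z_k \mid x_k)\, p(x_k \mid U_{1:k}, Z_{1:k-1})
\end{align*}
```

    Here p(xk | xk-1, uk) is the motion model and p(zk | xk) the measurement model (with the dependence on the map M left implicit, as in the second graph).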

     

    Steps

    The localisation process can be divided into two steps.

    • In the first step we perform dead reckoning, using the motion model and the control input or odometry. This is most often referred to as the prediction step.
    • In the second step we use measurements to update/correct/adjust the estimated position. This is most often referred to as the update step.

    As explained above, we can run only the prediction step, but then the error will keep increasing. We often run the prediction step at a higher rate than the update step, as the measurements, zk, are often available at a lower rate than uk.
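    The two steps can be made concrete with a minimal 1-D grid ("histogram") filter. Everything here (the circular 10-cell corridor, the door map, and the noise values) is invented for illustration.

```python
# A robot lives on a circular corridor of 10 cells and can only sense
# whether its current cell contains a door.
doors = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]    # hypothetical map M: 1 = door
n = len(doors)
belief = [1.0 / n] * n                     # global uncertainty: uniform prior

def predict(belief, p_move=0.9):
    """Prediction step: shift the belief one cell to the right, keeping
    some probability mass in place to model imperfect motion."""
    return [p_move * belief[(i - 1) % n] + (1 - p_move) * belief[i]
            for i in range(n)]

def update(belief, z, p_hit=0.8):
    """Update step: weight every cell by the likelihood of the door
    reading z, then renormalise."""
    w = [p_hit if doors[i] == z else 1.0 - p_hit for i in range(n)]
    b = [belief[i] * w[i] for i in range(n)]
    s = sum(b)
    return [x / s for x in b]

# The robot actually starts in cell 0 and moves right; its (here noise-free)
# door readings gradually single out one hypothesis.
pose = 0
for _ in range(5):
    pose = (pose + 1) % n
    belief = predict(belief)
    belief = update(belief, doors[pose])

print(belief.index(max(belief)))           # most likely cell: the true pose, 5
```

Note how the filter starts from a uniform belief (the global localisation case) and concentrates probability mass as evidence accumulates, exactly the behaviour described in the use cases below.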

    Use cases

    • Global localisation
      If the robot has no prior knowledge about its position when the localisation system is turned on, it has to find its position under global uncertainty. This is often made hard by symmetries, that is, several places that look similar to the sensor. This results in multiple possible locations. In the video below (watch the first minute) you can think of the blue dots (light blue top right, or dark blue bottom right) as hypotheses about where the robot can be. Note how initially many positions are possible and, as more evidence is gathered, hypotheses are gradually disconfirmed. Note that one has to account for uncertainty in this process so that an occasional outlier in the measurements does not confuse the robot.  
      Global Localization of a Mobile Robot (video)
    • Tracking
      When the position of the robot is known, the problem of localisation is that of estimating how the robot moves from one time step to the next. The dead-reckoning error over such short distances will be small, and it will typically be much easier to associate measurements with the parts of the environment that gave rise to them. That is, the so-called data association problem is simpler.
    • Kidnapped
      If you move the robot without letting it know about it, for example by lifting it up, you create a very challenging situation for the robot. This is referred to as "kidnapping". The challenge is that the robot first has to understand that it has been kidnapped and then find its position again. In the video above the robot is kidnapped (moved to a new location) shortly after 1 min. Once this is detected, the robot triggers global localisation again.

    Setups

    Infrastructure

    GPS is an example of an infrastructure approach to localisation. We fit the environment with satellites, which solves the localisation problem in a way that makes a model of the environment unnecessary. Other examples are motion capture systems, where high-end cameras placed in the room look at markers placed on the robot. These can give a position accuracy of a few millimeters and update rates of hundreds of Hz. Another example is to use ultra-wideband (UWB) radio nodes as "satellites" and receivers on the target to position, as in Bitcraze's local positioning system.

    https://www.youtube.com/watch?v=AwjoUTvIBvA

    Another example is to use Bluetooth nodes in the ceiling, as they do at the Stockholm International Fairs in Alvsjö. By measuring the received signal strength and which nodes are in range, you can calculate a rough position estimate, enough to tell, for example, which stand you are close to. The same thing can be accomplished with WiFi signals. The toll payment systems in many countries allow you to fit an RFID transponder in your car which can be read when you pass by. This is also a form of localisation.
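    A rough sketch of how signal strength maps to range in such systems, using the common log-distance path-loss model. The reference power and path-loss exponent below are assumed values; in practice they must be calibrated per site, which is one reason the resulting position is only approximate.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Rough range estimate from received signal strength using the
    log-distance path-loss model: rssi = tx_power - 10 * n * log10(d).

    tx_power_dbm (the RSSI at 1 m) and the path-loss exponent n are
    hypothetical values; real deployments calibrate both."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

print(rssi_to_distance(-59.0))   # at the 1 m reference power: 1.0 m
print(rssi_to_distance(-79.0))   # 20 dB weaker: about 10 m
```

Combining such range estimates from several nodes (or simply noting which nodes are in range at all) gives the coarse position described above.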

    A light version of the infrastructure approach is to fit the environment with simple landmarks of some sort which the robot can detect with its sensors. An example is the reflective stripes of tape that a laser scanner detects in many of the current automatically guided warehouse trucks.

    Uninstrumented

    In the uninstrumented approaches the robot makes use only of onboard sensors and a model of the environment. This is currently the best hypothesis for how it would have to work for a service robot in someone's home. It would not be possible to equip all home environments with infrastructure giving enough accuracy to operate autonomously. 

    Many car manufacturers allow their cars to drive autonomously only in regions where they have first built a map of the environment. The car then uses onboard sensors and matches the data from these to the stored map. GPS can be used to alleviate the global localisation problem in this case. 

     

     

    Representing the uncertainty and two common algorithms

    You need to be able to represent the uncertainty in the estimate, i.e. p(xk | U1:k, Z1:k ), and to incorporate new information. The two main methods for this are the (Extended) Kalman Filter and the Particle Filter. You should know, at least in rough terms, how these work.

    The Kalman filter represents the state estimate (the pose of the robot) as a Gaussian (normal distribution) and typically allows for a very efficient implementation that can estimate high-dimensional state spaces. The main limitation is that only uni-modal distributions can be captured.

    The particle filter maintains the estimate as a set of samples from the distribution. Given infinitely many particles, such a representation can represent any distribution arbitrarily well, importantly including multi-modal distributions. When the robot is powered on and starts trying to determine where it is, it often ends up with ambiguous answers because of symmetries in the environment. It could be that it knows it is in one of many rooms of the same size, but not which room. This calls for a representation that handles multi-modality, which is one of the advantages of the particle filter. Another is that it handles non-linearities well. A disadvantage is that it suffers from the curse of dimensionality: the number of particles needed grows exponentially with the number of dimensions of the state space we try to estimate.
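    To make the contrast concrete, here is a toy 1-D tracking sketch (not a full implementation; the noise levels and models are invented): the Kalman filter maintains just a mean and a variance, while the bootstrap particle filter maintains a set of samples.

```python
import math
import random

random.seed(1)

# --- 1-D Kalman filter: the belief is one Gaussian (mean, variance) ---
def kf_predict(mean, var, u, motion_var):
    return mean + u, var + motion_var              # prediction grows the variance

def kf_update(mean, var, z, meas_var):
    k = var / (var + meas_var)                     # Kalman gain
    return mean + k * (z - mean), (1.0 - k) * var  # fusing a measurement shrinks it

# --- bootstrap particle filter: the belief is a set of samples ---
def pf_step(particles, u, z, motion_std, meas_std):
    # predict: move every particle with sampled motion noise
    particles = [p + u + random.gauss(0.0, motion_std) for p in particles]
    # update: weight each particle by the measurement likelihood
    w = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
    # resample: draw a new particle set in proportion to the weights
    return random.choices(particles, weights=w, k=len(particles))

# track a robot moving +1 per step along a line, measuring position directly
truth, mean, var = 0.0, 0.0, 1.0
particles = [random.gauss(0.0, 1.0) for _ in range(1000)]
for _ in range(30):
    truth += 1.0
    z = truth + random.gauss(0.0, 0.5)             # noisy position measurement
    mean, var = kf_predict(mean, var, 1.0, 0.1)
    mean, var = kf_update(mean, var, z, 0.25)
    particles = pf_step(particles, 1.0, z, 0.3, 0.5)

pf_mean = sum(particles) / len(particles)
print(truth, round(mean, 2), round(pf_mean, 2))    # both track the true position
```

In this uni-modal tracking case both filters do fine; the particle filter's advantage would show in a global localisation scenario, where its samples can cover several separated hypotheses at once while the single Gaussian cannot.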

    Now look at the following material