Module 2 - Material - Motion Planning
Most of the material has been put together by Björn Olofsson, with important input also from Jana Tumova and Joshua Haustein. Thanks to all of them.
Learning Outcomes
The objective of this part of Module 2 is to get an introduction to motion planning and some common algorithms and computer tools for performing this kind of planning in practice. The module also intends to describe how motion planning could be a component in an autonomous system and its relation to the other components.
We have included more material in this module than most of you will have time to read. Use your own judgement and interest to decide which parts of the optional material (marked in red) to read.
Motion Planning and its Relation to Autonomous Systems
Motion planning is an essential component in many autonomous systems. Consider, for example, an autonomous car driving in an urban environment or UAVs performing different tasks. In these systems, a component is needed that computes a plan for how the desired tasks should be executed. The planner is often divided into different parts, doing planning at different levels. In the module on Task Planning (Module2 Task Planning), different strategies to compute a plan for solving a task were investigated. The motion planning component often acts based on the results from a high-level task planner. In the UAV example, the task planner could compute how the specific vehicles should move from one place to another and in what sequence (without details on how each vehicle should move in-between while taking physical constraints and obstacle avoidance into account). It is then the task of the motion planner to do a more detailed plan on how each vehicle should move from its start position to the desired goal position, considering all required constraints on the motion and avoiding obstacles and other vehicles in the same area, for example.
Motion Planning Criteria
In very general terms, the motion planning task could be described as transferring the state (such as position and velocity) of a moving system from an initial state to another desired goal state. During the motion, there are often certain criteria that need to be fulfilled for feasibility of the plan:
- The motion needs to adhere to the system dynamics, encoded by a model describing the outcome of a certain action.
- Obstacles, humans, and other vehicles (that could be stationary or time-varying) need to be avoided.
- Physical limits on control inputs need to be respected (for example, motor power in a car or the maximum torque of a robot motor).
- Constraints on internal states need to be satisfied (for example, limits on velocity or on the available friction between the tires and the road).
- Path constraints need to be fulfilled.
In addition to these criteria, there are sometimes additional specifications in terms of optimality of a computed motion plan. These criteria could be in terms of minimizing the time or path length required for performing a certain task or minimizing the energy consumed.
Path and Trajectory Planning
Fundamental concepts in motion planning are paths and trajectories. A path is a parametrized curve in space defining the geometric motion of the system. A trajectory is time-parametrized, thus specifying the relation between path and time during the course of the motion (or, equivalently, the velocity). The motion planning problem is sometimes divided into a path planning stage (where the system dynamics are sometimes ignored) and a subsequent trajectory planning stage for tracking the computed path (optional reading: Section 14.6 in "Planning Algorithms" by LaValle for further details on decoupled approaches to motion planning). The intention is to reduce the computational complexity of the complete task. With increased computing power, direct integrated approaches to motion planning are also tractable, ultimately leading to higher performance.
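The distinction can be made concrete with a small sketch (illustrative only; the straight-line path and the cubic timing law below are our own choices, not from the reading): the path fixes the geometry as a function of a parameter s in [0, 1], and the trajectory composes it with a timing law s(t).

```python
# A path q(s) fixes geometry only; a trajectory q(s(t)) adds timing.
def path(s):
    """Straight-line path from (0, 0) to (2, 1), parametrized by s in [0, 1]."""
    return (2.0 * s, 1.0 * s)

def timing(t, T=4.0):
    """Cubic rest-to-rest timing law: s(0) = 0, s(T) = 1, zero end velocities."""
    tau = min(max(t / T, 0.0), 1.0)
    return 3 * tau**2 - 2 * tau**3

def trajectory(t):
    """The same geometric curve, now with a specified position at each time t."""
    return path(timing(t))

print(trajectory(2.0))  # halfway in time and, here, halfway along the path: (1.0, 0.5)
```

The same path can be paired with many different timing laws, which is exactly what a decoupled planner exploits: geometry first, velocity profile second.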
Reactive Sensor-Based Motion Planning
Motion planning in uncertain, unstructured, or unknown environments needs to be reactive. Inputs from sensors provide information about the ego system and the environment (see Module 1 - Sensing). In a self-driving car, this could be LIDARs and IMUs providing data to the motion planning and control system. Based on the data acquired from these sensors, the path and the associated trajectory often need to be replanned online as a result of new information arriving. Considering that this could be a system moving with potentially high velocity (such as a car driving on a highway), the replanning in many cases needs to be done under real-time computational constraints, and there must always be a trajectory for bringing the system to a complete standstill for safety. A YouTube video illustrating a motion planner acting in an autonomous vehicle, comprising many of the features described in the text above, is available here: A* in Action - Artificial Intelligence for Robotics
Motion Planning and Feedback Control
Under ideal conditions, the path or trajectory computed by the motion planner will lead exactly to the desired behavior of the autonomous system when executed. However, because of uncertainty in the model and the parameters of the system as well as the environment, the motion planned often needs to be accompanied by feedback control (see Module 2 - Control) in order for the execution to be robust and lead to the intended motion. There is also a trade-off between when to do replanning and when to let the feedback controller handle local deviations from the planned path and trajectory.
Suggested Reading Material
The following book chapter and survey article provide an introduction, an overview, and many examples of motion planning in different fields:
- Chapters 1.1-1.4 in "Planning Algorithms" by LaValle (~20 pages, easy reading).
- Chapters I-IV in "A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles" by Paden et al. (16 pages, extra reading).
Algorithms and Methods
We will investigate three common classes of motion planning algorithms. There are many variants and extensions of the fundamental algorithms discussed; the references give more details on these aspects.
Motion Planning Based on Graph Search
One class of motion planning algorithms is based on graph search for establishing a path or trajectory for the system to follow. The first step in such methods is to establish the graph, with its edges and vertices, to be used for the search. Each vertex in the graph corresponds to a state of the system, and if there is a way to go from a particular vertex to another vertex, an edge is added in the graph. Once the graph is established, different methods are available for searching effectively in the graph to find a solution from the initial state to the desired final goal. Each edge could also be associated with a cost, for example the length of that path segment or the time to go from vertex A to vertex B. An introduction to graph search in the context of motion planning is provided in the following chapter:
- Chapter 2.2 in "Planning Algorithms" by LaValle (12 pages, good general-knowledge material)
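As a minimal, self-contained illustration of these ideas (not taken from the reading), consider Dijkstra's algorithm on a 4-connected grid: each free cell is a vertex, edges connect neighboring free cells, and every edge carries unit cost (it could equally well be a segment length or a traversal time).

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Assumes the goal is reachable from the start."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}   # best known cost-to-come per vertex
    prev = {}             # predecessor on the best path found so far
    pq = [(0.0, start)]   # min-heap ordered by cost-to-come
    while pq:
        d, v = heapq.heappop(pq)
        if v == goal:
            break
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry, already improved
        r, c = v
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0  # unit edge cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = v
                    heapq.heappush(pq, (nd, (nr, nc)))
    # reconstruct the vertex sequence from the goal back to the start
    path, v = [goal], goal
    while v != start:
        v = prev[v]
        path.append(v)
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 0)))  # route around the wall of obstacles
```

A* works the same way but additionally orders the queue by cost-to-come plus a heuristic estimate of the remaining cost, which is what the lattice planner in Computer Exercise 2 builds on.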
The infinite-dimensional continuous state space of a vehicle or robot needs to be discretized for graph search purposes. The discretized state space is referred to as the state-space lattice, and the motion of the vehicle or robot is then planned on this lattice.
When considering motion constraints, for example those implied by the system dynamics, the possible state transitions can be defined by so-called motion primitives. These primitives are typically defined and computed a priori and describe the possible transitions from one state to another. In the example of a ground vehicle with the constraint that it cannot move sideways, this motion restriction is implicitly encoded in the motion primitives. The motion primitives should span a sufficiently large part of the possible motions from a given state such that the resulting state-space lattice is sufficiently dense, but there is a trade-off between the number of primitives and the computation time of the motion planning algorithm. The concept and computation of motion primitives are described in the following paper:
- "Generating near minimal spanning control sets for constrained motion planning in discrete state spaces" by Pivtoraiko and Kelly.
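A hypothetical sketch of the idea: each primitive below is a displacement expressed in the vehicle frame together with a heading change, and a lattice state (x, y, θ) is expanded by applying every primitive. The zero lateral displacement encodes the no-sideways-motion constraint implicitly. The primitive set and its geometry are invented for illustration; real primitives are short, dynamically feasible motion segments computed as in the paper above.

```python
import math

# Illustrative primitives (dx, dy, dtheta) in the vehicle frame; dy == 0 in
# every primitive encodes that the vehicle cannot move sideways.
PRIMITIVES = [
    (1.0, 0.0, 0.0),           # straight ahead
    (1.0, 0.0, math.pi / 4),   # forward while turning left
    (1.0, 0.0, -math.pi / 4),  # forward while turning right
]

def successors(state):
    """Expand a lattice state (x, y, theta) with every motion primitive,
    rotating the body-frame displacement into the world frame."""
    x, y, theta = state
    out = []
    for dx, dy, dtheta in PRIMITIVES:
        nx = x + dx * math.cos(theta) - dy * math.sin(theta)
        ny = y + dx * math.sin(theta) + dy * math.cos(theta)
        out.append((nx, ny, theta + dtheta))
    return out

for s in successors((0.0, 0.0, 0.0)):
    print(s)
```

A graph-search planner calls such a successor function on every expanded vertex, so the set of primitives fully determines which lattice states are reachable and along which motions.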
A graph is constructed based on the motion primitives, possibly incrementally as new sensor data become available. Each edge, corresponding to a certain motion segment, could also be associated with a cost. In the graph construction, obstacles in the vicinity need to be considered, thus preventing the application of certain motion primitives. The solution to the motion planning problem is then obtained as a sequence of motion primitives through the graph; the particular sequence of primitives is the result of the graph search. The following presentation and article describe motion planning based on motion primitives and graph search:
- Presentation "Search-based Planning with Motion Primitives" by Likhachev (good to read if you want to do the optional Computer Exercise 2 later).
- "Planning Long Dynamically-Feasible Maneuvers for Autonomous Vehicles" by Likhachev and Ferguson (optional reading, but we suggest you at least have a look at the DARPA challenge results section).
Suggested Additional Reading Material
- "Lattice-based motion planning for a general 2-trailer system" by Ljungqvist et al. (the author is a WASP student from batch 1).
Sampling-Based Motion Planning Algorithms
Another set of motion planning methods is based on sampling in the state space of the system to be planned for (such as a vehicle or robot). Here, we consider Rapidly-exploring Random Trees (RRTs), which constitute one common class of sampling-based algorithms. The following article presents the foundations of the method:
- "Randomized Kinodynamic Planning" by LaValle and Kuffner (a classic in the field).
The method is based on incrementally constructing a tree in the state space, and here a summary of the key ideas is given: Random samples, possibly biased toward the goal state or toward other desirable directions, are generated iteratively. The point in the tree constructed so far that is closest to the random sample is used for expansion of the tree. From that closest point in the tree, the system is steered toward that random sample (in many cases by simulation of the model of the system to account for kinematic and possibly dynamic constraints) until an obstacle is detected or it comes sufficiently close to the random sample. The corresponding edge is added to the tree. This process is then repeated until a solution from the initial state to the desired goal state is found.
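The loop described above can be sketched in a few lines for the 2D world used later in Computer Exercise 1 (a toy implementation with our own step size and goal bias, not OMPL's; for simplicity, only the new endpoint is collision-checked rather than the whole edge, and steering is a straight-line step rather than a simulated system model).

```python
import math
import random

START, GOAL = (0.0, 0.0), (1.0, 1.0)
OBSTACLE, RADIUS = (0.5, 0.5), 0.25   # circular obstacle, as in Exercise 1
STEP, GOAL_BIAS = 0.05, 0.1           # illustrative tuning choices

def collides(p):
    return math.dist(p, OBSTACLE) < RADIUS

def rrt(seed=0, max_iters=5000):
    random.seed(seed)
    nodes = [START]
    parent = {0: None}
    for _ in range(max_iters):
        # 1. Draw a random sample, with a small bias toward the goal state.
        q = GOAL if random.random() < GOAL_BIAS else (random.random(), random.random())
        # 2. Find the tree vertex closest to the sample (linear scan).
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        near = nodes[i]
        d = math.dist(near, q)
        if d == 0:
            continue
        # 3. Steer a fixed step from the nearest vertex toward the sample.
        new = (near[0] + STEP * (q[0] - near[0]) / d,
               near[1] + STEP * (q[1] - near[1]) / d)
        if collides(new):
            continue  # extension blocked by the obstacle
        # 4. Add the new vertex and edge to the tree.
        parent[len(nodes)] = i
        nodes.append(new)
        # 5. Stop once the tree reaches the goal region; extract the path.
        if math.dist(new, GOAL) < STEP:
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt()
print(len(path), "vertices from start to goal region")
```

The path is typically jagged and far from optimal, which motivates the RRT* variant discussed below.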
Suggested reading material for motion planning with RRTs is:
- Chapter 5.5 (the fundamental ideas) in "Planning Algorithms" by LaValle.
- Chapter 14.4.3 (more advanced, with differential constraints) in the same book.
If optimal motion planning is to be performed, the RRT algorithm described in the previous paragraph can be extended to RRT*. RRT* aims to minimize the cost of the motion from the initial state to the desired goal state by successively rewiring the vertices in the constructed tree whenever a path that leads to a lower cost is found when a new vertex is added. The RRT* algorithm is described in the following article, where it is also compared to RRT:
- Chapter 3.3.3, Algorithm 6, and Figs. 12-22 in "Sampling-based algorithms for optimal motion planning" by Karaman and Frazzoli (good to read before doing Computer Exercise 1).
The RRT* algorithm is extended to planning for systems with differential constraints in the following paper:
- "Optimal kinodynamic motion planning using incremental sampling-based methods" by Karaman and Frazzoli.
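The rewiring step that distinguishes RRT* from RRT can be illustrated in isolation (a simplified sketch: the full algorithm, Algorithm 6 in the paper above, also chooses the best parent for the new vertex, collision-checks rewired edges, and propagates updated costs to descendants; the coordinates below are invented for illustration).

```python
import math

def rewire(nodes, parent, cost, new_idx, radius):
    """Redirect nearby vertices through the newly added vertex whenever that
    lowers their cost-to-come. nodes: list of (x, y); parent and cost are
    dicts indexed like nodes."""
    new = nodes[new_idx]
    for i, p in enumerate(nodes):
        if i == new_idx:
            continue
        d = math.dist(new, p)
        if d <= radius and cost[new_idx] + d < cost[i]:
            parent[i] = new_idx           # cheaper to come via the new vertex
            cost[i] = cost[new_idx] + d

# Tiny example: vertex 2 initially routes via the detour 0 -> 1 -> 2 at cost
# 2*sqrt(2); the newly added vertex 3 offers a cheaper route 0 -> 3 -> 2.
nodes = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (1.0, 0.0)]
parent = {0: None, 1: 0, 2: 1, 3: 0}
cost = {0: 0.0,
        1: math.dist(nodes[0], nodes[1]),
        2: math.dist(nodes[0], nodes[1]) + math.dist(nodes[1], nodes[2]),
        3: math.dist(nodes[0], nodes[3])}
rewire(nodes, parent, cost, 3, radius=1.5)
print(parent[2], cost[2])  # → 3 2.0
```

Vertex 1 is also within the rewiring radius but keeps its parent, since going via vertex 3 would not lower its cost; only improving connections are rewired.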
Optimal Path and Trajectory Planning Using Dynamic Optimization
Yet another category of motion planning problems is conveniently formulated as optimal control problems and solved using dynamic optimization. This formulation enables comparatively complex models of the system dynamics and constraints on the motion to be included in the formulation. In addition, a certain cost function (or objective function) to be minimized can be formulated within such a framework. Optimal control problems are often formulated in continuous time, since the model of the system is derived from physical principles. Occasionally, the optimal control problem can be solved with analytical methods, but often numerical optimization needs to be employed for computing a solution (see also Module 2 - Control for related approaches with MPC and convex optimization applied in receding horizon).
For transforming the infinite-dimensional optimization problem in continuous time into a finite-dimensional approximation that can be used in numerical optimization, such as gradient descent or Newton methods, different approaches exist. Common methods are so-called direct methods based on collocation or multiple shooting, which transform the optimization problem into a nonlinear program that can be solved using different numerical solvers. For an introduction to dynamic optimization, see the following chapter:
- Chapter 2.3 in the Ph.D. thesis "Numerical and Symbolic Methods for Dynamic Optimization" by Fredrik Magnusson.
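As a toy instance of a direct method (our own example, not from the thesis): for a double integrator p'' = u discretized with explicit Euler, the minimum-energy rest-to-rest transfer becomes a small linear-algebra problem, because the terminal state is linear in the control sequence. The horizon, time step, and boundary states are illustrative choices.

```python
# Double integrator p'' = u, discretized with explicit Euler (zero initial
# state). The terminal state is linear in the controls u_0, ..., u_{N-1}:
#   p_N = sum_j a_j u_j  with  a_j = dt^2 * (N - 1 - j)
#   v_N = sum_j c   u_j  with  c   = dt
N, dt = 50, 0.1
a = [dt * dt * (N - 1 - j) for j in range(N)]
c = dt
b1, b2 = 1.0, 0.0  # desired terminal position and velocity (rest at p = 1)

# Minimizing sum(u_j^2) subject to the two linear constraints gives the
# minimum-norm solution u = A^T (A A^T)^{-1} b, where A is 2 x N, so
# (A A^T) is only a 2 x 2 matrix, inverted by hand below.
g11 = sum(x * x for x in a)
g12 = sum(x * c for x in a)
g22 = N * c * c
det = g11 * g22 - g12 * g12
l1 = (g22 * b1 - g12 * b2) / det
l2 = (-g12 * b1 + g11 * b2) / det
u = [x * l1 + c * l2 for x in a]

# Verify by forward simulation of the discretized dynamics.
p, v = 0.0, 0.0
for k in range(N):
    p += dt * v
    v += dt * u[k]
# p ≈ 1 and v ≈ 0: the boundary conditions are met.
```

Collocation and multiple shooting generalize this pattern to nonlinear dynamics and inequality constraints, where the resulting nonlinear program is handed to a numerical solver instead of being solved in closed form.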
The chapter in the previous reference also describes an extension of Modelica (see Module 2 - Modeling and Simulation) called Optimica for formulating optimization problems based on Modelica models. Dynamic optimization has long been used for optimal motion planning for vehicles and robots of different types. A survey article comprising many different applications and the employed methods is:
- Survey article "Faster, Higher, and Greener: Vehicular Optimal Control" by Limebeer and Rao in Control Systems Magazine.
Computer Exercise 1: Sampling-Based Motion Planning Using RRT* in OMPL
In this computer exercise, we will use RRT* to plan the path of a particle-shaped vehicle or robot moving in a subset of R2. The Open Motion Planning Library (OMPL) implements several state-of-the-art algorithms for sampling-based motion planning. Here, we will use the RRT* implementation in OMPL to do optimal motion planning with various optimization objectives. We will base the exercise on the following tutorial in OMPL:
Look in particular at how the tree is built up during the planning process on the page above. The task is to move the particle-shaped vehicle or robot in a 2D world, where the position is defined by the coordinates (x, y). The vehicle starts at (0, 0) and the desired goal is to reach (1, 1). A circular obstacle with radius 0.25 is placed at the coordinate (0.5, 0.5), inside which the vehicle or robot is not allowed to move.
In the following, we will assume that the computer exercise is run on Ubuntu 16.04. Start by installing OMPL and associated tutorial files:
sudo apt-get install libompl-dev ompl-demos
Then create a new directory for the computer exercise (~/module2/wasp_as1_taskplan/ompl in the example below), go to that directory, and finally copy the file "OptimalPlanning.cpp" from the OMPL demo collection:
mkdir -p ~/module2/
cd ~/module2
git clone https://github.com/pjensfelt/wasp_as1_taskplan.git
cd ~/module2/wasp_as1_taskplan/ompl
cp /usr/share/ompl/demos/OptimalPlanning.cpp .
Open the file "OptimalPlanning.cpp" in a text editor. Spend 10 minutes scanning through the code. Since you may not know the syntax, you should not aim to understand all the details; focus on getting a grip on roughly how much work it is to set up a problem like this, for possible use in the future. Note in particular the last part of the code, where a collection of different objective functions defining the cost of the motion is specified. The objective function to be used is chosen by uncommenting the appropriate line around line 136. There are six alternatives:
pdef->setOptimizationObjective(getPathLengthObjective(si));
// pdef->setOptimizationObjective(getThresholdPathLengthObj(si));
// pdef->setOptimizationObjective(getClearanceObjective(si));
// pdef->setOptimizationObjective(getBalancedObjective1(si));
// pdef->setOptimizationObjective(getBalancedObjective2(si));
// pdef->setOptimizationObjective(getPathLengthObjWithCostToGo(si));
This objective function defines the cost to be the path length. Compile the code with the command in a terminal:
g++ OptimalPlanning.cpp -lompl -lboost_program_options -oOptimalPlanning
Perform the planning by the command (the solution is written to the file sol.txt):
./OptimalPlanning sol.txt
This command should result in some printouts in the terminal about the solution process, ending with the text (the path length in the example below could vary slightly):
Found solution of path length 1.50622.
On some systems (not the WASP VM) it seems that you need to run it as
./OptimalPlanning -f sol.txt
If you do this in the WASP VM you will end up with a plan file called -f.
The obtained solution can then be visualized using the Python code in the file "visualize.py". This file is included in wasp_as1_taskplan in the ompl directory, which you should have by now. If not, download it and save it in the same directory as the file "OptimalPlanning.cpp". Illustrate the solution in the map of the 2D world with the command:
python visualize.py
We will now investigate how RRT* works in the considered example by altering the objective function and some other parameters for the planning algorithm. Remember to recompile, execute, and visualize the path planning after you have changed any parameter in the "OptimalPlanning.cpp" file:
g++ OptimalPlanning.cpp -lompl -lboost_program_options -oOptimalPlanning
./OptimalPlanning sol.txt
python visualize.py
or run them all on one line like this
g++ OptimalPlanning.cpp -lompl -lboost_program_options -oOptimalPlanning && ./OptimalPlanning sol.txt && python visualize.py
where the && between the commands makes the next command in line only run if the previous one did not result in an error.
Perform the following simulation experiments:
- There are six different objective functions predefined in "OptimalPlanning.cpp". By default, the following objective function is active:
pdef->setOptimizationObjective(getPathLengthObjective(si));
Compute the solution for each of the objective functions by commenting out the old one and uncommenting a new one in "OptimalPlanning.cpp". Do the computed paths make sense with respect to the chosen objective function? Which of the six objective functions gives the longest path length? Write down this path length, since it will be asked for in the quiz.
- In the objectives called "BalancedObjective1" and "BalancedObjective2", there is a possibility to trade off minimizing the path length against the clearance to the obstacle. The default weights are 10.0 and 1.0; see lines 257-258 in the code specifying "getBalancedObjective1". Change the weighting parameters at the end of the file, where "BalancedObjective1" is defined, and then recompile and compute the planned paths (remember also to select the corresponding objective function out of the six available ones on lines 136-141 in the code). Try the combinations (1.0, 1.0) and (25.0, 1.0) and compare with the default (10.0, 1.0). Are the results in agreement with intuition? Notice which of these three combinations gives the shortest path and remember that path length; it will be asked for in the quiz.
- If time permits, you can also:
  - change the desired goal position and recompute the motion plan.
  - let the solver have more time allocated for the solution by changing the value "1.0" (in seconds) on the line:
ob::PlannerStatus solved = optimizingPlanner->solve(1.0);
Computer Exercise 2: Lattice-Based Motion Planning using SBPL
In this second computer exercise, we will use a lattice-based graph-search planner for computing the path for vehicles or mobile robots where the kinematic constraints imply certain restrictions on the motion that can be performed. We will use the Search-Based Planning Library (SBPL), and base the computer exercise on the following tutorial in SBPL:
- "Planning with SBPL: Example in X, Y, Theta State Space" - Tutorial in SBPL.
The planning is for the considered vehicle or mobile robot defined by two translational coordinates (x, y) and the orientation θ. The motions possible to perform with the specific vehicle or mobile robot are defined by the motion primitives; here we will investigate some different motion constraints and how they affect the resulting planned path. The vehicle is moving in an environment where the free space is described by a map that could be obtained from a SLAM algorithm (see Module 1 - Algorithms for further details). The graph search is based on an extension of the A* algorithm called ARA*, on which more information is available in this article (optional reading):
- "ARA*: Anytime A* with Provable Bounds on Sub-Optimality" by Likhachev, Gordon, and Thrun.
In the following, we will assume that the computer exercise is run on Ubuntu 16.04. Start by downloading, building, and installing SBPL with the commands (you can skip the first 3 if you already did the first exercise):
mkdir -p ~/module2
cd ~/module2
git clone https://github.com/pjensfelt/wasp_as1_taskplan.git
cd ~/module2/wasp_as1_taskplan/
git clone https://github.com/sbpl/sbpl.git
cd sbpl
mkdir build
cd build
cmake ..
make
sudo make install
For the system to be able to find the SBPL library we need to modify the environment variable LD_LIBRARY_PATH by including /usr/local/lib. This is done by
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
You can ensure that this is done in all new terminal windows from now on by adding it at the end of your .bashrc file.
echo "export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:/usr/local/lib" >> ~/.bashrc
After completing the installation of SBPL, we will study a motion planning example. Start by creating a new directory in the "sbpl" directory:
cd ~/module2/wasp_as1_taskplan/sbpl
mkdir example
cd example
cp ~/module2/wasp_as1_taskplan/sbpl_ex/* .
You should now have two files in the example directory, visualize_ex2.py and xytheta.cpp. If not, you can download the code in the file "xytheta.cpp" from the tutorial homepage and save it in a file "xytheta.cpp" in the "example" directory created above. Spend 10 minutes reading through the code and investigating which parameters can be altered. Start by setting appropriate values for the initial and final states of the system by changing the line:
setEnvStartGoal(env, .11, .11, 0, 35, 47.5, 0, start_id, goal_id);
to
setEnvStartGoal(env, 0, 0, 0, 6, 2, 0, start_id, goal_id);
We can now compile the program by the following command in a terminal:
g++ xytheta.cpp -lsbpl -oxytheta
The motion planner can now be executed by running "xytheta" and specifying the map and the motion primitives to be used. We will in this exercise use a default map and some files with motion primitives available already in SBPL. The default maps are available in the directory "sbpl/env_examples/nav3d/". The one we will use for this exercise is the file "cubicle-25mm-inflated-env.cfg". The files with motion primitives are available in the directory "sbpl/matlab/mprim/". If you would like to know more details on how the motion primitives are generated in SBPL, see the following page in the SBPL documentation (optional reading):
Open the file "sbpl/matlab/mprim/unicycle_noturninplace.mprim", which defines the allowed motions for a unicycle type of vehicle with the constraint that it cannot turn in place. In the file, a number of motion primitives are defined, parametrized in (a subset of) the start and final states of each motion segment (note that the motion primitives here are translationally invariant). Each primitive is also associated with a certain cost. We can now execute the motion planner for this vehicle by running the command:
./xytheta ../env_examples/nav3d/cubicle-25mm-inflated-env.cfg ../matlab/mprim/unicycle_noturninplace.mprim
The resulting planned path is written to the file "sol.txt". You can visualize the solution in the map using the Python code in the file "visualize_ex2.py". You should have it already; if not, download it and save it in the same directory as the other files for the exercise ("example"). Then illustrate the results with the command:
python visualize_ex2.py ../env_examples/nav3d/cubicle-25mm-inflated-env.cfg sol.txt
Perform the following simulation experiments:
- Experiment with different goal states by changing the values 6 and 2 (corresponding to the x and y coordinates, respectively) in the file "xytheta.cpp" on the line:
setEnvStartGoal(env, 0, 0, 0, 6, 2, 0, start_id, goal_id);
- Remember to recompile the program, execute it again, and plot the results using "visualize_ex2.py" after any changes in the "xytheta.cpp" file.
- Investigate how the resulting planned paths change when another vehicle or mobile robot type is chosen. This can be achieved by selecting different files with motion primitives. In particular, try the files "unicycle_backonlystraight.mprim", "pr2_all_2.5cm_20turncost.mprim", and "non_uniform_res0025_rad1_err005.mprim" in the directory "../matlab/mprim/". You do not have to recompile "xytheta.cpp" for this purpose, just change the file path argument when executing the motion planner:
./xytheta ../env_examples/nav3d/cubicle-25mm-inflated-env.cfg ../matlab/mprim/[FILE.mprim]
- Are the planned paths consistent with the intuition obtained from the defined motion primitives in the respective file? How do the motion constraints influence the resulting path?