Assignment 1: ROS and Sensing

Objectives

The objectives of this first assignment are, firstly, to familiarize yourself with common software tools used when working with autonomous systems and, secondly, to understand the common sensors a robot uses to perceive the world around it, as well as to understand what it itself is doing.

Prerequisite

Before starting, make sure that you have completed the Assignments Setup.

 

Submission

There are two submissions connected to this assignment.

 

Background

ROS

This course will make heavy use of the Robot Operating System, or ROS. You will use it as a tool in the course, and the programming you do in the ROS environment will be Python code in prepared skeletons that shield you from most of the "ROS stuff". However, if you have not used ROS before but might in the future, take the chance to get to know it better during the course by exploring a bit on your own. There are many well-written tutorials that will guide you through the basics on the ROS webpage.

We will use ROS2 rather than the original ROS (ROS1). ROS1 is probably still the dominant version in use, but the future is ROS2 and many companies already use it. Besides keeping an eye on the future, those of you who have not yet made the transition from ROS1 will get to experience ROS2.

 

Sensing

Besides using ROS, this first assignment is about sensing. If you are new to the area, you might want to start with the slides about sensors from Isaac Skog and Michael Felsberg from the first course round.

We will focus on the following sensors in this course:

  • IMU
    • IMU stands for Inertial Measurement Unit and typically contains
      • Gyro: measures rotation speed around three axes
      • Accelerometer: measures linear acceleration (including the gravitational force) along three axes
      • Magnetometer: measures the magnetic field as a 3D vector
    • These individual sensors are also often used on their own
      • Accelerometers are great for detecting the direction of gravity and help your phone tell how to orient the screen. They also help determine when to fire an airbag.
      • Gyros are used to estimate the orientation, which is very important if you want to use an accelerometer to estimate motion. Since the accelerometer cannot distinguish the reaction force from gravity from forces generated by acceleration, you need to remove gravity and thus know the orientation very accurately.
      • The magnetometer is great at giving you an absolute orientation measurement. What you get from a gyro will drift (it integrates rotation speed). The challenge with a magnetometer is instead that many things other than the Earth generate magnetic fields, so you will most likely not be able to tell the direction of north reliably. If you do not move much you will still be able to tell whether you have rotated, which is very useful, for example, in VR headsets.
    • How MEMS Accelerometer, Gyroscope and Magnetometer Work (you only need to watch the first 3 minutes; we will not work with the Arduino board)
    • Short calculation of the impact of a gyro bias (see also the small numerical sketch after this list)
  • GNSS ("GPS")
  • Camera
    • Sometimes referred to as an "RGB camera" to distinguish it from monochrome cameras and cameras that also give depth.
    • The camera counts the photons that hit the pixels in the image sensor and outputs an image, often represented as a 2D array.
    • The image sensor often uses a Bayer filter and does not actually capture R, G and B in each pixel, but only one of them, and then interpolates to get R, G and B values for each pixel.
    • Whether your camera has a global shutter or a rolling shutter (see this special version) affects both its price and its performance. In a global shutter camera all pixel values are acquired at the same time, whereas in a rolling shutter camera the rows of the image are read out one after the other. The shutter type is important, for example, when there is fast motion.
    • To be able to measure things with the camera you need a model of it. The most common camera model is the pinhole camera model (see the small projection sketch after this list).
    • Camera calibration is also required for many tasks.
      • The intrinsic camera parameters include the focal length, which is the main parameter when it comes to projecting a point in the world onto the image sensor.
      • A real camera does not behave as a perfect pinhole camera because of imperfections and lens distortion, which we also need to calibrate for. By compensating for this we can use the pinhole camera model.
      • The extrinsic camera parameters tell us where the camera is with respect to the scene. In practical applications we also calibrate the transformations between the camera (and the other sensors) and the car/robot/etc., so that we can, for example, relate what is seen by one sensor to what is seen by another.
  • RGB-D sensor
    • Depth information makes many computer vision tasks much simpler.
    • When Microsoft released the Kinect around 2010 it revolutionized the world of sensing. From having relied on complicated stereo setups to get depth information, depth was suddenly available "out of the box" from the sensor.
    • There are three main methods for getting depth in an RGB-D camera: structured light, time of flight (ToF), and (active) stereo.
    • Since the depth information is often acquired with a different sensor (IR) than the one that gives you the RGB image, you need to make sure that your sensor is calibrated, so that you know which color pixel a certain depth value corresponds to.
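
To make the gyro drift discussion above concrete, here is a minimal Python sketch of the kind of calculation referred to in the "impact of a gyro bias" note. The bias value and sample rate are made-up illustration numbers, not properties of any particular IMU: the sketch integrates a constant gyro bias over time and shows how the resulting orientation error corrupts gravity removal for an accelerometer.

import numpy as np

# Assumed illustration values: a 0.01 rad/s (~0.57 deg/s) constant gyro bias,
# sampled at 100 Hz for 60 seconds.
bias = 0.01   # rad/s
dt = 0.01     # s (100 Hz)
t = np.arange(0.0, 60.0, dt)

# Integrating the bias gives a tilt/heading error that grows linearly with time.
angle_error = bias * t   # rad
print(f"Tilt error after 60 s: {np.degrees(angle_error[-1]):.1f} deg")

# If that tilt error is used when removing gravity from the accelerometer,
# a spurious horizontal acceleration of g*sin(error) is left in the signal.
g = 9.81
residual_acc = g * np.sin(angle_error)   # m/s^2

# Double-integrating the residual acceleration gives the position error
# of naive accelerometer-only dead reckoning.
velocity_error = np.cumsum(residual_acc) * dt
position_error = np.cumsum(velocity_error) * dt
print(f"Position error after 60 s: {position_error[-1]:.0f} m")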

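As a companion to the pinhole camera and RGB-D bullets above, the sketch below projects a 3D point into the image using assumed intrinsics (the focal length and principal point values are made up for illustration; real values come from calibration) and then back-projects a pixel plus a depth value to a 3D point, which is essentially what an RGB-D camera driver does for you.

import numpy as np

# Assumed (made-up) intrinsics: focal lengths fx, fy and principal point (cx, cy),
# all in pixels. Real values are obtained through camera calibration.
fx, fy = 615.0, 615.0
cx, cy = 320.0, 240.0

def project(point_3d):
    """Pinhole projection: 3D point in the camera frame -> pixel (u, v)."""
    x, y, z = point_3d
    return fx * x / z + cx, fy * y / z + cy

def back_project(u, v, depth):
    """Inverse: pixel (u, v) plus a depth value -> 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

p = np.array([0.2, -0.1, 2.0])   # a point 2 m in front of the camera
u, v = project(p)
print(f"pixel: ({u:.1f}, {v:.1f})")
print("back-projected:", back_project(u, v, 2.0))   # recovers the original point
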
Task 1.1

Simulated TurtleBot

Launch Webots and RViz using the course-supplied launch file:

ros2 launch assignment_1 turtlebot_simulation.launch.py

Are you not able to run the above? Maybe you built the code for the first time and are trying to run it directly in the same terminal. This will not work unless you source the setup file again (see the end of Assignments Setup), or alternatively close the terminal and open a new one. Why? When you run the setup script it looks for all packages, so when you ran it last time the new package, assignment_1, did not exist yet. If things work as expected you should be able to press the TAB key to get a listing of the things you can run and launch. Try typing ros2 launch ass and then press TAB.

Launch the keyboard teleoperation controller so that you can control the TurtleBot in the simulated environment:

ros2 run teleop_twist_keyboard teleop_twist_keyboard

If the terminal with the keyboard control node is in focus, you will be able to control the simulated robot. Move around a bit and collect some data. The terminal will tell you how to control the robot. You will be able to open the doors to the different rooms by driving the robot into them. In Webots, you can also drag the doors (and other objects) open by holding down the Alt key, left clicking with your mouse on the object you want to move, and then moving the mouse cursor in the direction you want to apply force to the object.
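
If you prefer to command the robot from code rather than the keyboard, a minimal rclpy publisher such as the sketch below sends velocity commands. The topic name /cmd_vel is the teleop_twist_keyboard default and an assumption here; check with ros2 topic list which topic the simulated TurtleBot actually listens to.

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class SimpleDriver(Node):
    """Publishes a constant forward velocity with a slight turn at 10 Hz."""

    def __init__(self):
        super().__init__('simple_driver')
        # /cmd_vel is an assumed topic name (the teleop default); remap if needed.
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_timer(0.1, self.send_command)

    def send_command(self):
        msg = Twist()
        msg.linear.x = 0.1    # m/s forward
        msg.angular.z = 0.2   # rad/s turn
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = SimpleDriver()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()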

 

Real Robot

Now you have seen simulated sensor data in Webots. We want you to compare that to real sensor data from a real robot equipped with an Intel RealSense D435i. A useful feature in ROS is the ability to record sensor data from a real robot and play it back as if it were experienced live. These recordings are referred to as rosbags. Rosbags are highly valuable when developing algorithms and methods, as they allow you to quickly, easily, and cheaply test your software on real data that the robot should be able to deal with. However, as the data is a recording it is not possible to affect it; it will simply come as it was recorded. This limits its usefulness slightly, and it is therefore good to complement it with a simulator.

First you will need to download the rosbag containing the captured data from a real robot: https://kth-my.sharepoint.com/:u:/g/personal/dduberg_ug_kth_se/Ee9Lob3Ynl5Grj51rD7F2WEBFjPIKQNMsLmD8ln529YEnQ?e=v34Bb4

Unzip the file after you have downloaded it. Next, start RViz with the supplied config file:

rviz2 -d ~/ros2_ws/src/wasp_autonomous_systems/assignment_1/rviz/real_robot.rviz --ros-args -p use_sim_time:=true

In a new terminal play the rosbag:

ros2 bag play --read-ahead-queue-size 100 -l -r 1.0 --clock 100 <DIR>/real_robot

Where <DIR> is the directory where the real_robot directory is located.

To read what the optional arguments --read-ahead-queue-size, -l, -r, and --clock do, you can run ros2 bag play -h.
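
To study the recorded sensor data numerically rather than only in RViz, you can subscribe to one of the bag's topics from a small rclpy node, as sketched below. The topic name /imu and the message type are assumptions; run ros2 topic list and ros2 topic info while the bag is playing to see what is actually published.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu

class ImuEcho(Node):
    """Prints accelerometer and gyro readings from an (assumed) /imu topic."""

    def __init__(self):
        super().__init__('imu_echo')
        self.create_subscription(Imu, '/imu', self.callback, 10)

    def callback(self, msg):
        a = msg.linear_acceleration
        w = msg.angular_velocity
        self.get_logger().info(
            f'acc=({a.x:.2f}, {a.y:.2f}, {a.z:.2f}) m/s^2  '
            f'gyro=({w.x:.3f}, {w.y:.3f}, {w.z:.3f}) rad/s')

def main():
    rclpy.init()
    rclpy.spin(ImuEcho())
    rclpy.shutdown()

if __name__ == '__main__':
    main()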

Video Demonstration

Below is a video demonstrating what you should see in Task 1.1. Note that your task starts after you have been able to view the data. Your task is to study the sensor data more closely and identify properties of the different sensors.

(if audio or video quality is low, try to adjust your settings).

 

Task 1.2

Autonomous driving has made a lot of progress in recent years. In this second task, you will look at a rosbag recording from a car. The dataset is one of many from Kitti. The car has a LiDAR and stereo cameras, and you will be able to see the data from them in RViz.

First you will need to download the rosbag: https://kth-my.sharepoint.com/:u:/g/personal/dduberg_ug_kth_se/Eau4cvqhsaFHn8LSCemrk1sBVQ27JtrrNVWxOBp5JEIkLA?e=TIKkx9

Unzip the file after you have downloaded it. Next, start RViz with the supplied config file:

rviz2 -d ~/ros2_ws/install/assignment_1/share/assignment_1/rviz/kitti.rviz --ros-args -p use_sim_time:=true

In a second terminal, run the following to see a car in RViz when the rosbag is playing:

ros2 launch wasp_autonomous_systems kitti_car.launch.py

In a third terminal play the rosbag:

ros2 bag play --read-ahead-queue-size 100 -l -r 1.0 --clock 100 <DIR>/kitti

Where <DIR> is the directory where the kitti directory is located.
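
If you want to look at the LiDAR data as numbers rather than only as a point cloud in RViz, a small subscriber like the sketch below prints some basic statistics for each scan. The topic name /kitti/velo/pointcloud is an assumption; use ros2 topic list while the bag is playing to find the actual LiDAR topic. The point cloud parsing uses the sensor_msgs_py package, which should be available in recent ROS2 distributions.

import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2

class CloudStats(Node):
    """Prints the number of LiDAR points and their maximum range per scan."""

    def __init__(self):
        super().__init__('cloud_stats')
        # Assumed topic name; check `ros2 topic list` for the real one.
        self.create_subscription(
            PointCloud2, '/kitti/velo/pointcloud', self.callback, 10)

    def callback(self, msg):
        pts = np.asarray([[p[0], p[1], p[2]] for p in point_cloud2.read_points(
            msg, field_names=('x', 'y', 'z'), skip_nans=True)], dtype=float)
        if pts.size == 0:
            return
        ranges = np.linalg.norm(pts, axis=1)
        self.get_logger().info(
            f'{pts.shape[0]} points, max range {ranges.max():.1f} m')

def main():
    rclpy.init()
    rclpy.spin(CloudStats())
    rclpy.shutdown()

if __name__ == '__main__':
    main()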

Video Demonstration

Below is a video demonstrating what you should be seeing.

 

Task 1.3

GNSS (GPS) has changed the way we and our autonomous systems navigate. For many applications it solves the localisation problem (where are we?). We humans have largely stopped reading maps and instead look to our phones to tell us where to go (some research suggests that this might be a bad thing).

In Assignment 2 we will look closer at combining GPS with IMU data. The video below shows one example of recent use of GNSS. By putting a receiver on each runner in an orienteering competition, you can follow the race live on a map. It has turned what used to be a largely unobservable event into one where the runners' progress can be tracked live on the map. Take a look at the video below. Discuss how well GNSS tracks the position of the runners. Does the accuracy match your assumptions? Assume that a runner was instead a robot: could you drive it safely using GNSS as the source of position information?

https://www.youtube.com/watch?v=TDYds447vO8

 

Extra task (if you miss the deadline)

Collect data from a GPS receiver while you walk around and plot your trajectory overlaid on a map. It is OK to do a manual registration between the map and the trajectory.
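
A minimal way to do the plotting part, assuming you have exported your GPS track to a CSV file with latitude and longitude columns (the file name walk.csv and the column names lat/lon below are assumptions; adapt them to whatever your GPS logger produces), is sketched here. A simple local equirectangular conversion to meters is accurate enough over a walk-sized area.

import csv
import math
import matplotlib.pyplot as plt

# Assumed input: a CSV file with 'lat' and 'lon' columns in degrees,
# e.g. exported from a GPS logger app on your phone.
lats, lons = [], []
with open('walk.csv') as f:
    for row in csv.DictReader(f):
        lats.append(float(row['lat']))
        lons.append(float(row['lon']))

# Simple local equirectangular projection to meters around the first fix.
R = 6371000.0  # Earth radius in meters
lat0, lon0 = math.radians(lats[0]), math.radians(lons[0])
x = [R * (math.radians(lon) - lon0) * math.cos(lat0) for lon in lons]
y = [R * (math.radians(lat) - lat0) for lat in lats]

plt.plot(x, y)
plt.axis('equal')
plt.xlabel('east [m]')
plt.ylabel('north [m]')
plt.title('GPS trajectory (register manually against a map)')
plt.show()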

Submission: Include your results in the report by extending the template appropriately.