We want to be able to control the robot’s position, but to do this we need to be able to tell the robot how to move. The control signal we have to move the robot is a motor PWM command, which effectively commands how much effort we wish the motors to produce. However, it would be most convenient to be able to command the velocities of the robot from a higher level in order to move the robot in the environment. One simple way to control the robot speed is to control the speed of each wheel. This can be accomplished in a few different ways. One method is to set the motor PWM based on a function that you determine will produce the desired wheel speed. This is called open-loop control because there is no feedback; it is the least accurate way of controlling the speed. A better method is to use encoder data to control the wheel speed with a feedback controller. The goal of this part of the lab is to build an increasingly sophisticated robot controller, and at the end to take quantitative data to compare the versions. Additional hints/instructions can be viewed HERE.
Follow the instructions below to characterize your motors. This will involve building the firmware, flashing the Pico, taking the output, and saving it into a .csv file (or Google Doc). You can then use this data to run a linear regression.
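If it helps, here is a minimal sketch of the least-squares fit you might run on the logged data. The file name, the two-column pwm,speed CSV layout, and the variable names are assumptions, not part of the provided tooling.

// calibrate_motor.cpp -- least-squares fit of measured wheel speed vs. PWM duty.
// Assumes a two-column CSV: pwm,measured_speed (one sample per line).
#include <fstream>
#include <iostream>
#include <sstream>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: calibrate_motor <data.csv>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::vector<double> pwm, vel;
    std::string line;
    while (std::getline(in, line)) {
        std::stringstream ss(line);
        double u, v; char comma;
        if (ss >> u >> comma >> v) { pwm.push_back(u); vel.push_back(v); }
    }
    if (pwm.size() < 2) { std::cerr << "not enough samples\n"; return 1; }
    // Ordinary least squares for vel = m * pwm + b.
    double n = pwm.size(), su = 0, sv = 0, suu = 0, suv = 0;
    for (size_t i = 0; i < pwm.size(); ++i) {
        su += pwm[i]; sv += vel[i]; suu += pwm[i] * pwm[i]; suv += pwm[i] * vel[i];
    }
    double m = (n * suv - su * sv) / (n * suu - su * su);
    double b = (sv - m * su) / n;
    std::cout << "speed = " << m << " * pwm + " << b << "\n";
    // Inverted for open-loop control: pwm = (desired_speed - b) / m.
    return 0;
}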
Required For the Report:
Please include discussions on:
Within mbot-controller-pico/src/mbot.c, create two controllers: one to control each wheel velocity using your open-loop calibration, and one to control each wheel’s speed using PID controllers. The controllers should keep the wheels moving at the desired speed so the robot drives in as straight a line as possible without heading correction, though this likely won’t be perfect. Observe these controllers, compare their behavior, and then either choose one of them or use them as a starting point for a more sophisticated controller.
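As a starting point, here is a minimal sketch of how the two controllers might be structured; it is not the firmware’s actual API. The open-loop controller simply inverts the calibration line from your regression, and the closed-loop controller adds a PID correction driven by the encoder speed. The PID struct, the slope/intercept parameters, and the function names are placeholders.

// Sketch of the two wheel-speed controllers (not the real firmware interface).
struct PID {
    float kp, ki, kd;
    float integral = 0.0f, prev_error = 0.0f;
    float update(float error, float dt) {
        integral += error * dt;
        float derivative = (error - prev_error) / dt;
        prev_error = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Open loop: invert the calibration line to map a desired speed to a PWM duty.
float open_loop_pwm(float desired_speed, float slope, float intercept) {
    return (desired_speed - intercept) / slope;
}

// Closed loop: open-loop feedforward plus a PID correction on the measured encoder speed.
float closed_loop_pwm(PID& pid, float desired_speed, float measured_speed,
                      float slope, float intercept, float dt) {
    float feedforward = open_loop_pwm(desired_speed, slope, intercept);
    float correction  = pid.update(desired_speed - measured_speed, dt);
    return feedforward + correction;
}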
Other features you could consider adding/changing for your controller:
You can modify the previous python scripts from the Intro & Setup assignment to generate velocity commands to test your controller. Remember, this will require running the shim and timesync binaries.
Required For the Report:
Please include discussions on:
Implement the odometry functions in mbot-controller-pico/src/mbot.c to calculate the robot’s position and orientation and enable dead reckoning with wheel encoders only. Test your implementation by manually moving the robot by known distances and turning it by known angles. If you are unsatisfied with the accuracy of the odometry, try a calibration procedure such as UMBmark.
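A minimal sketch of the dead-reckoning update from encoder tick deltas is shown below. TICKS_PER_METER and WHEEL_BASE are placeholders for your measured robot parameters, and the Pose struct is not the firmware’s actual type.

#include <cmath>

// Differential-drive odometry update from encoder tick deltas since the last loop.
constexpr float TICKS_PER_METER = 1.0f;   // placeholder: set from encoder/gear/wheel specs
constexpr float WHEEL_BASE      = 0.15f;  // placeholder: distance between the wheels [m]

struct Pose { float x, y, theta; };

void odometry_update(Pose& pose, int delta_left_ticks, int delta_right_ticks)
{
    float d_left   = delta_left_ticks  / TICKS_PER_METER;
    float d_right  = delta_right_ticks / TICKS_PER_METER;
    float d_center = 0.5f * (d_left + d_right);          // forward distance traveled
    float d_theta  = (d_right - d_left) / WHEEL_BASE;    // change in heading

    // Propagate in the world frame using the midpoint heading.
    pose.x     += d_center * std::cos(pose.theta + 0.5f * d_theta);
    pose.y     += d_center * std::sin(pose.theta + 0.5f * d_theta);
    pose.theta += d_theta;
    // Wrap theta to (-pi, pi].
    pose.theta = std::atan2(std::sin(pose.theta), std::cos(pose.theta));
}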
Required For the Report:
Add another macro GYRODOMETRY to mbot-controller-pico/src/mbot.h that can be set to either 0 or 1. Modify your odometry code to check the value of this macro: when it is 0, run the standard odometry; when it is 1, run your gyro sensor fusion.
Either implement the “gyrodometry” algorithm discussed in lecture or implement your own algorithm for fusing the gyro data with odometry data to estimate the heading of the MBot.
Hint: The preferred way to read \(\Delta\theta_{gyro}\) is by taking the difference between consecutive samples of tb_angles.
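For reference, here is a minimal sketch of the gyrodometry-style selection rule: trust the encoder heading change unless it disagrees with the gyro heading change by more than a threshold. The threshold value and the function name are placeholders you would tune and adapt to the firmware.

#include <cmath>

// Sketch of gyrodometry heading fusion. GYRODOMETRY would normally be defined in mbot.h;
// the threshold is a placeholder to tune experimentally for your robot.
#ifndef GYRODOMETRY
#define GYRODOMETRY 1
#endif

constexpr float GYRO_ODOM_THRESHOLD = 0.005f;  // [rad] per update; placeholder value

float fused_delta_theta(float dtheta_odom, float dtheta_gyro)
{
    if (GYRODOMETRY && std::fabs(dtheta_gyro - dtheta_odom) > GYRO_ODOM_THRESHOLD) {
        return dtheta_gyro;   // large disagreement: odometry likely corrupted (slip, bump)
    }
    return dtheta_odom;       // otherwise the encoders are usually less noisy per step
}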
Required For the Report:
Note: If you feel like you can completely ignore the IMU for heading information or alternatively completely ignore the odometry, you should describe this in your report, and include your experimental data to justify your decision.
Add a robot-frame velocity controller on top of the wheel PID controllers in mbot-controller-pico/src/mbot.c. For this controller, instead of the wheel setpoint velocities being determined directly by the motor command, they are determined by another PID loop that compares the current measured robot velocity to the target velocity.
This can then be tested by modifying the script botlab/python/step_test.py to create different trajectories within Python. You can also test this further by developing the motion controller in Section 1.6.
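Relating to the robot-frame velocity controller described above, here is a minimal sketch of the kinematic layer underneath it: converting a commanded forward velocity v and angular velocity w into wheel speed setpoints (which the wheel PIDs then track), and converting measured wheel speeds back into a measured body velocity for the outer PID. WHEEL_BASE and the struct names are placeholders.

// Differential-drive kinematics between body-frame velocity and wheel speeds.
constexpr float WHEEL_BASE = 0.15f;  // placeholder track width [m]

struct WheelSetpoints { float left, right; };

WheelSetpoints body_to_wheel_speeds(float v, float w)
{
    WheelSetpoints sp;
    sp.left  = v - 0.5f * w * WHEEL_BASE;
    sp.right = v + 0.5f * w * WHEEL_BASE;
    return sp;
}

// Inverse: measured wheel speeds -> measured body velocity, used by the outer PID.
void wheel_to_body_speeds(float v_left, float v_right, float& v, float& w)
{
    v = 0.5f * (v_left + v_right);
    w = (v_right - v_left) / WHEEL_BASE;
}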
Required For the Report:
Note that the ultimate goal is for you to have a working controller that controls your robot in a precise way, so you may need to experiment with several controllers, observe how they behave, and choose the one that behaves best for your robot.
Modify the python script botlab/python/step_test.py to create a new timed drive for executing a square. Build off of this script, as it has a slightly different format for sending motion commands than the Python motor commands in the Teleop Pico introduction assignment.
Required For the Report:
Finally, design your motion controller for driving between waypoints of type pose_xyt_t and implement it in botlab/src/mbot/motion_controller.cpp in the botlab repository. This program will run on the RPi. Your motion controller should take in messages of type robot_path_t on the channel CONTROLLER_PATH and execute the trajectory until the robot reaches the final waypoint.
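One common strategy, sketched below under assumed types and gains (not the repository’s actual structures), is to steer toward the next waypoint with a bearing-proportional angular velocity and a distance-proportional forward velocity, then rotate in place to the waypoint’s final heading once close enough.

#include <cmath>

// Sketch of a simple go-to-waypoint controller; gains, tolerances, and the pose
// struct are placeholders for your own tuning and the repo's pose_xyt_t type.
struct PoseXYT { float x, y, theta; };

float wrap_angle(float a) { return std::atan2(std::sin(a), std::cos(a)); }

// Computes commanded (v, w) for the current pose and the target waypoint.
void waypoint_control(const PoseXYT& pose, const PoseXYT& target, float& v, float& w)
{
    const float k_rho = 0.5f, k_alpha = 1.5f;   // placeholder gains to tune
    const float dist_tol = 0.05f;               // [m] waypoint arrival tolerance

    float dx = target.x - pose.x;
    float dy = target.y - pose.y;
    float rho = std::sqrt(dx * dx + dy * dy);                     // distance to waypoint
    float alpha = wrap_angle(std::atan2(dy, dx) - pose.theta);    // bearing error

    if (rho > dist_tol) {
        v = k_rho * rho * std::fmax(0.0f, std::cos(alpha));  // no forward drive while pointed away
        w = k_alpha * alpha;                                  // steer toward the waypoint
    } else {
        v = 0.0f;
        w = k_alpha * wrap_angle(target.theta - pose.theta);  // align the final heading
    }
}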
We have provided a template for sending pose commands through the program botlab/src/mbot/drive_square.cpp. You can use this script to tune the PID of the motion controller, as well as copy it as a template for giving waypoints of an arbitrary path.
You may need to tune the PID controller within botlab/src/mbot/motion_controller.cpp. You can also tune the limits for the forward velocity and angular velocity to balance the speed and accuracy of your motion controller. Modify these values at the bottom of botlab/src/mbot/motion_controller.cpp.
Required For the Report:
During the SLAM part of the lab, you will build an increasingly sophisticated algorithm for mapping and localizing in the environment. You will begin by constructing an occupancy grid using known poses. Following that, you’ll implement Monte Carlo Localization in a known map. Finally, you will put these pieces together to create a full SLAM system. Much of this can be done using only the provided logs at first, moving to the actual robot when you are ready.
To run the botgui:
pi@raspberrypi:~ $ cd botlab
pi@raspberrypi:botlab $ source setenv.sh
pi@raspberrypi:botlab $ ./bin/botgui
For this phase, we have provided three LCM logs containing sensor data from the MBot along with ground-truth poses as estimated by the staff’s SLAM implementation:
data/convex_10mx10m_5cm.log: a convex environment, where all walls are always visible and the robot remains stationary (use for initial testing of algorithms).
data/drive_square_10mx10m_5cm.log: a convex environment while driving a square.
data/obstacle_slam_10mx10m_5cm.log: driving a circuit in an environment with several obstacles.
To play back these recorded LCM sessions on a laptop (with Java), you can use lcm-logplayer-gui:
$ lcm-logplayer-gui data/convex_10mx10m_5cm.log
Note: you can also run the command-line version, lcm-logplayer, if the system does not have Java. In the GUI version you can turn off LCM channels with a checkbox. The same functionality is available in the command-line version; check the help with
$ lcm-logplayer --help
You will need to turn off the LCM channel SLAM_MAP to visualize the results of your mapping implementation.
Similarly, to record your own LCM logs, for testing or otherwise, you can use lcm-logger:
$ lcm-logger -c SLAM_POSE_CHANNEL my_lcm_log.log
This example would store data from the SLAM_POSE_CHANNEL channel in the file my_lcm_log.log. With no channels specified, all channels will be recorded.
The laser rangefinder sweeps through 360 degrees at a slow enough rate that the robot may move a non-negligible distance during a single scan. Therefore, you will need to estimate the actual pose of the robot when each beam was sent in order to determine the cell in which the beam terminated. To do so, you must interpolate between the robot pose estimate of the current scan and that of the scan immediately before. Remember that the pose estimate of the previous SLAM update occurred immediately before the current scan started. Likewise, the pose estimate of the current SLAM update occurs when the final beam of the current scan is measured. We have provided code to handle this interpolation for you, located in src/slam/moving_laser_scan.hpp. You only need to implement the poses to use for the interpolation.
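A minimal sketch of the interpolation itself, assuming a simple pose struct with a timestamp (not the repo’s exact types): linearly blend the position by the ray’s fraction of the scan interval, and interpolate the heading along the shortest angular difference.

#include <cmath>
#include <cstdint>

// Blend the pose at the start of the scan (previous SLAM update) and the pose at the
// end of the scan (current SLAM update) according to when each ray was measured.
struct PoseXYT { float x, y, theta; int64_t utime; };

float wrap_angle(float a) { return std::atan2(std::sin(a), std::cos(a)); }

PoseXYT interpolate_pose(const PoseXYT& begin, const PoseXYT& end, int64_t ray_utime)
{
    float t = 0.0f;
    if (end.utime > begin.utime) {
        t = static_cast<float>(ray_utime - begin.utime) /
            static_cast<float>(end.utime - begin.utime);
    }
    PoseXYT p;
    p.utime = ray_utime;
    p.x = begin.x + t * (end.x - begin.x);
    p.y = begin.y + t * (end.y - begin.y);
    // Interpolate heading along the shortest angular difference.
    p.theta = wrap_angle(begin.theta + t * wrap_angle(end.theta - begin.theta));
    return p;
}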
Implement the occupancy grid mapping algorithm in the Mapping class in slam/mapping.cpp/.hpp.
An occupancy grid describes the space around the robot via the probability that each cell of the grid is occupied or free. We will use a grid with at most 10 cm cells, whose log-odds values are integers in the range [-127, 127]. To perform the ray casting on your occupancy grid you might want to implement Bresenham’s line algorithm as described in lecture.
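For reference, here is a minimal sketch of the per-ray update using Bresenham’s line algorithm; the flat grid representation and the hit/miss increments are placeholders you would replace with your OccupancyGrid interface and tuned values.

#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Trace one ray from the robot cell to the cell where the beam terminated:
// decrease log-odds of cells passed through, increase log-odds of the endpoint,
// clamping to the [-127, 127] range.
constexpr int kHitOdds  = 3;   // placeholder increment for the endpoint cell
constexpr int kMissOdds = 1;   // placeholder decrement for cells the ray passes through

void update_cell(std::vector<int8_t>& grid, int width, int x, int y, int delta)
{
    int idx = y * width + x;
    int v = std::clamp(grid[idx] + delta, -127, 127);
    grid[idx] = static_cast<int8_t>(v);
}

void score_ray(std::vector<int8_t>& grid, int width, int x0, int y0, int x1, int y1)
{
    int dx = std::abs(x1 - x0), dy = std::abs(y1 - y0);
    int sx = (x0 < x1) ? 1 : -1;
    int sy = (y0 < y1) ? 1 : -1;
    int err = dx - dy;
    int x = x0, y = y0;
    while (x != x1 || y != y1) {
        update_cell(grid, width, x, y, -kMissOdds);   // free space along the ray
        int e2 = 2 * err;
        if (e2 > -dy) { err -= dy; x += sx; }
        if (e2 <  dx) { err += dx; y += sy; }
    }
    update_cell(grid, width, x1, y1, +kHitOdds);      // occupied at the endpoint
}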
For this task, use the ground-truth poses in the provided log files to construct occupancy grid maps of each one. Using the poses from the log file is handled by specifying the --mapping-only command-line option when you run the slam program.
Required For the Report:
As discussed in lecture, Monte Carlo Localization (MCL) is a particle-filter-based localization algorithm. Implementation of MCL requires three key components: an action model to predict the robot’s pose, a sensor model to calculate the likelihood of a pose given a sensor measurement, and various functions for particle filtering including drawing samples, normalizing particle weights, and finding the weighted mean pose of the particles. You will implement MCL using odometry, laser scans, and an occupancy grid map.
In these tasks, you’ll run the slam program in localization-only mode using a saved map. Use the ground-truth maps provided with the sensor logs. You can run slam using:
./slam --localization-only <filename>
To test the action model only, you can run in action-only mode with localization-only turned on:
./slam --localization-only <filename> --action-only
Implement an action (or motion) model using odometry or wheel encoder data. The skeleton of the action model can be found in slam/action_model.h/.cpp.
Refer to Chapter 5 of Probabilistic Robotics for a discussion of common action models. You can base your implementation on the pseudo-code in this chapter. Two action models are discussed in detail: the Velocity Model (Sec. 5.3) and the Odometry Model (Sec. 5.4).
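A minimal sketch of an odometry-based action model following the rotate-translate-rotate decomposition of Sec. 5.4 is given below. The alpha noise parameters and the pose struct are placeholders, and the noise scaling shown is one common simplification rather than the book’s exact pseudo-code.

#include <algorithm>
#include <cmath>
#include <random>

struct PoseXYT { float x, y, theta; };

float wrap_angle(float a) { return std::atan2(std::sin(a), std::cos(a)); }

// Applies the odometry change (odom_prev -> odom_curr), perturbed by noise, to a particle.
PoseXYT apply_action(const PoseXYT& particle, const PoseXYT& odom_prev,
                     const PoseXYT& odom_curr, std::mt19937& rng)
{
    const float a1 = 0.05f, a2 = 0.05f, a3 = 0.1f, a4 = 0.1f;   // placeholder noise params

    // Decompose the odometry change into rotate - translate - rotate.
    float dx = odom_curr.x - odom_prev.x;
    float dy = odom_curr.y - odom_prev.y;
    float trans = std::sqrt(dx * dx + dy * dy);
    float rot1  = (trans > 1e-4f) ? wrap_angle(std::atan2(dy, dx) - odom_prev.theta) : 0.0f;
    float rot2  = wrap_angle(odom_curr.theta - odom_prev.theta - rot1);

    // Perturb each component with zero-mean Gaussian noise.
    auto noisy = [&rng](float mean, float stddev) {
        std::normal_distribution<float> d(mean, std::max(stddev, 1e-6f));
        return d(rng);
    };
    float rot1_hat  = noisy(rot1,  a1 * std::fabs(rot1) + a2 * trans);
    float trans_hat = noisy(trans, a3 * trans + a4 * (std::fabs(rot1) + std::fabs(rot2)));
    float rot2_hat  = noisy(rot2,  a1 * std::fabs(rot2) + a2 * trans);

    // Apply the noisy motion to the particle.
    PoseXYT next;
    next.x = particle.x + trans_hat * std::cos(particle.theta + rot1_hat);
    next.y = particle.y + trans_hat * std::sin(particle.theta + rot1_hat);
    next.theta = wrap_angle(particle.theta + rot1_hat + rot2_hat);
    return next;
}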
Required For the Report:
Implement a sensor model that calculates the likelihood of a pose given a laser scan and an occupancy grid map. The skeleton of the sensor model can be found in slam/sensor_model.h/.cpp.
Refer to Chapter 6 of Probabilistic Robotics for a discussion of common sensor models. You can base your implementation on the pseudo-code in this chapter.
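As a lighter-weight alternative to the full beam model, here is a minimal sketch of a simplified endpoint-scoring sensor model: each ray contributes a score based on the log-odds of the cell where it terminates. The Ray and Grid structs, the cell lookup, and the scoring weights are placeholders for your own types, not the repo’s API.

#include <cmath>
#include <cstdint>
#include <vector>

struct Ray { float range; float theta; };   // range [m], angle relative to the robot frame
struct PoseXYT { float x, y, theta; };

struct Grid {
    int width, height;
    float meters_per_cell, origin_x, origin_y;
    std::vector<int8_t> log_odds;
    int8_t at(int x, int y) const {
        if (x < 0 || y < 0 || x >= width || y >= height) return 0;
        return log_odds[y * width + x];
    }
};

// Scores how well a candidate pose explains the scan: higher is more likely.
double score_scan(const PoseXYT& pose, const std::vector<Ray>& rays, const Grid& grid)
{
    double likelihood = 0.0;
    for (const Ray& ray : rays) {
        float angle = pose.theta + ray.theta;             // ray direction in the map frame
        float end_x = pose.x + ray.range * std::cos(angle);
        float end_y = pose.y + ray.range * std::sin(angle);
        int cx = static_cast<int>((end_x - grid.origin_x) / grid.meters_per_cell);
        int cy = static_cast<int>((end_y - grid.origin_y) / grid.meters_per_cell);
        int8_t odds = grid.at(cx, cy);
        if (odds > 0) likelihood += odds;    // endpoint falls in an occupied cell: reward
        else          likelihood -= 1.0;     // free or unknown cell: small penalty
    }
    return likelihood;
}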
Finish implementing the particle filter contained in slam/particle_filter.h/.cpp.
Refer to Chapter 4 of Probabilistic Robotics for help in implementing your particle filter.
Hint: If your sensor model runs slowly, consider increasing the ray stride in the MovingLaserScan call.
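For the resampling step, here is a minimal sketch of low-variance (systematic) resampling as described in Chapter 4, assuming the weights have already been normalized to sum to 1; the Particle struct is a placeholder.

#include <random>
#include <vector>

struct Particle { float x, y, theta, weight; };

// Draws a new particle set using a single random offset and evenly spaced pointers,
// which keeps the sample variance low compared to independent multinomial draws.
std::vector<Particle> low_variance_resample(const std::vector<Particle>& particles,
                                            std::mt19937& rng)
{
    if (particles.empty()) return {};
    const int n = static_cast<int>(particles.size());
    std::vector<Particle> resampled;
    resampled.reserve(n);

    std::uniform_real_distribution<float> dist(0.0f, 1.0f / n);
    float r = dist(rng);                 // single random offset
    float c = particles[0].weight;       // running cumulative weight
    int i = 0;
    for (int m = 0; m < n; ++m) {
        float u = r + m * (1.0f / n);    // evenly spaced pointers into the CDF
        while (u > c && i < n - 1) {
            ++i;
            c += particles[i].weight;
        }
        Particle p = particles[i];
        p.weight = 1.0f / n;             // weights reset after resampling
        resampled.push_back(p);
    }
    return resampled;
}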
Required For the Report:
You have now implemented mapping using known poses and localization using a known map. You can now run the slam program in SLAM mode, which uses the following simple SLAM algorithm:
NOTE: The above logic is already implemented for you. Your goal for this task is to make sure your localization and mapping are accurate enough to provide a stable estimate of the robot pose and map. You will need good SLAM performance to succeed at the robot navigation task.
Required For the Report:
obstacle_slam_10mx10m_5cm.log: use this log to estimate the accuracy of your system and include statistics such as RMS error, etc.

Using the SLAM algorithm implemented in Part 1, you can now construct a map of an environment using the MBot simulator. Now you will implement additional capabilities for the MBot: path planning using A* and autonomous exploration of an environment.
The robot configuration space is implemented in planning/obstacle_distance_grid.h/.cpp. Run obstacle_distance_grid_test and check if your code passes the three tests. The botgui already has a call for a mapper object from which it can obtain and draw an obstacle distance grid. You can view the generated obstacle distance grid by ticking “Show Obstacle Distances” in botgui.
Write an A* path planner that will allow you to find plans from one pose in the environment to another. You will integrate this path planner with the motion_controller from Phase 2 to allow your MBot to navigate autonomously through the environment.
For this phase, you will implement an A* planner and a simple configuration space for the MBot. The skeleton for the A* planner is located in planning/astar.h/.cpp. Your implementation will be called by the code in planning/motion_planner.h/.cpp. You can test your A* code using botgui; check the astar_test code for appropriate arguments.
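For orientation, here is a minimal sketch of grid A* over an 8-connected grid using an obstacle-distance check for collisions; the flat grid representation, robot-radius test, and step costs are placeholders, not the repo’s actual interfaces.

#include <algorithm>
#include <cmath>
#include <queue>
#include <unordered_map>
#include <vector>

struct Cell { int x, y; };
struct Node { int idx; float f; };
struct NodeCmp { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

// obstacle_dist holds the distance to the nearest obstacle for each cell (row-major).
std::vector<Cell> astar(const std::vector<float>& obstacle_dist,
                        int width, int height, float robot_radius,
                        Cell start, Cell goal)
{
    auto index   = [&](int x, int y) { return y * width + x; };
    auto in_grid = [&](int x, int y) { return x >= 0 && y >= 0 && x < width && y < height; };
    auto h       = [&](int x, int y) { return std::hypot(float(goal.x - x), float(goal.y - y)); };

    std::priority_queue<Node, std::vector<Node>, NodeCmp> open;
    std::unordered_map<int, float> g_cost;
    std::unordered_map<int, int>   parent;

    int s = index(start.x, start.y);
    g_cost[s] = 0.0f;
    open.push({s, h(start.x, start.y)});

    while (!open.empty()) {
        Node cur = open.top(); open.pop();
        int cx = cur.idx % width, cy = cur.idx / width;
        if (cx == goal.x && cy == goal.y) {
            std::vector<Cell> path;                      // walk parents back to the start
            for (int i = cur.idx; ; i = parent[i]) {
                path.push_back({i % width, i / width});
                if (i == s) break;
            }
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int dy = -1; dy <= 1; ++dy) {
            for (int dx = -1; dx <= 1; ++dx) {
                if (dx == 0 && dy == 0) continue;
                int nx = cx + dx, ny = cy + dy;
                if (!in_grid(nx, ny)) continue;
                int ni = index(nx, ny);
                if (obstacle_dist[ni] <= robot_radius) continue;   // too close to an obstacle
                float step = (dx != 0 && dy != 0) ? 1.41421f : 1.0f;
                float g = g_cost[cur.idx] + step;
                auto it = g_cost.find(ni);
                if (it == g_cost.end() || g < it->second) {
                    g_cost[ni] = g;
                    parent[ni] = cur.idx;
                    open.push({ni, g + h(nx, ny)});
                }
            }
        }
    }
    return {};   // no path found
}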
Once your planner is implemented, test it by constructing a map using your SLAM algorithm and then using botgui to generate a plan. Right-click somewhere in free space on your map. Your planner will then be run inside botgui and a robot_path_t will be sent to the motion_controller for execution.
astar_test & astar_test_files can be used to test the performance of your A* planner.
Required For the Report:
Using astar_test_files, report statistics on your path-planning execution times for each of the example problems in the data/astar folder; you simply need to run astar_test_files after implementing your algorithm. If your algorithm is optimal and fast, great. If not, please discuss possible reasons and strategies for improvement.

Up to now, your MBot has always been driven by hand or driven to goals selected by you. For this task, you’ll write an exploration algorithm that will have the MBot autonomously select its motion targets and plan and follow a series of paths to fully explore an environment.
We have provided an algorithm for finding the frontiers (the borders between free space and unexplored space) in planning/frontiers.h/.cpp. Plan and execute a path to drive to the frontier. Continue driving until the map is fully explored, i.e. no frontiers exist. Once you have finished exploring the map, return to the starting position. Your robot needs to be within 0.05 m of the starting position. The state machine controlling the exploration process is located in planning/exploration.h/.cpp.
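A minimal sketch of the goal-selection step of the exploration state machine, under assumed types: drive to the nearest frontier centroid while frontiers remain, and return home once none are left. The frontier representation and helper names are placeholders for your own code.

#include <cmath>
#include <vector>

struct PoseXYT { float x, y, theta; };
struct Frontier { std::vector<PoseXYT> cells; };

PoseXYT frontier_centroid(const Frontier& f)
{
    PoseXYT c{0.0f, 0.0f, 0.0f};
    if (f.cells.empty()) return c;
    for (const PoseXYT& p : f.cells) { c.x += p.x; c.y += p.y; }
    c.x /= static_cast<float>(f.cells.size());
    c.y /= static_cast<float>(f.cells.size());
    return c;
}

// Returns the next goal to plan toward: the nearest frontier centroid, or the
// starting pose (home) once the map has no frontiers left.
PoseXYT choose_next_goal(const std::vector<Frontier>& frontiers,
                         const PoseXYT& current, const PoseXYT& home)
{
    if (frontiers.empty()) return home;          // exploration finished, go home
    PoseXYT best = frontier_centroid(frontiers.front());
    float best_dist = std::hypot(best.x - current.x, best.y - current.y);
    for (const Frontier& f : frontiers) {
        PoseXYT c = frontier_centroid(f);
        float d = std::hypot(c.x - current.x, c.y - current.y);
        if (d < best_dist) { best = c; best_dist = d; }
    }
    return best;
}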
Required For the Report:
For the competition task, you will be asked to localize on a map where you do not know your initial position. This will require initializing your particles uniformly in open space on the map. Then you will need a way to move from this unknown location without hitting obstacles and without the use of your A* planner in order to update your localization estimate until your distribution is narrow enough to know where you are.
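A minimal sketch of that initialization step, under an assumed grid representation (not the repo’s types): collect the free cells of the map, then draw particle positions uniformly from them with random headings and equal weights.

#include <cstdint>
#include <random>
#include <vector>

struct Particle { float x, y, theta, weight; };

struct Grid {
    int width, height;
    float meters_per_cell, origin_x, origin_y;
    std::vector<int8_t> log_odds;           // < 0 means free in this sketch
};

constexpr float kPi = 3.14159265f;

std::vector<Particle> init_particles_uniform(const Grid& grid, int num_particles,
                                             std::mt19937& rng)
{
    // Collect all free cells first so every free cell is equally likely to be drawn.
    std::vector<int> free_cells;
    for (int i = 0; i < grid.width * grid.height; ++i) {
        if (grid.log_odds[i] < 0) free_cells.push_back(i);
    }
    if (free_cells.empty()) return {};

    std::uniform_int_distribution<int> pick(0, static_cast<int>(free_cells.size()) - 1);
    std::uniform_real_distribution<float> angle(-kPi, kPi);

    std::vector<Particle> particles;
    particles.reserve(num_particles);
    for (int n = 0; n < num_particles; ++n) {
        int idx = free_cells[pick(rng)];
        Particle p;
        p.x = grid.origin_x + (idx % grid.width + 0.5f) * grid.meters_per_cell;
        p.y = grid.origin_y + (idx / grid.width + 0.5f) * grid.meters_per_cell;
        p.theta = angle(rng);
        p.weight = 1.0f / num_particles;
        particles.push_back(p);
    }
    return particles;
}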
Required For the Report: