
AMR Navigation Spotlight – Planning and Control

Blogs October 4, 2024

Welcome to the seventh blog in our AMR navigation spotlight series, where we’ll be focusing on how AMRs handle route planning and control. Click here to read the previous blog in the series, which discusses different sensors available for perception and mapping.

If you’ve been following along at home, you may well have a good portion of your autonomous mobile robot built by now. You’ll certainly have a good understanding of the different localisation options for your AMR, sensor placement considerations, timing and synchronisation, interfacing, and how your robot uses localisation data.

But how do you actually program a robot to get from A to B? That’s what we’re discussing here. It’s time for an introduction to planning and control.

 

What is robot planning and control?

Don’t worry, we aren’t talking about any sort of AI uprising (though we can’t promise you it won’t happen). When it comes to AMR navigation, planning refers to the ability of your robot to plan a route from a starting point to an objective.

Robot control refers to how your AMR takes the planned route and uses it to calculate control commands (also referred to as a control scheme) that physically move the robot along the route.

 

How does robot planning work?

Just like a sat nav, route planning involves an algorithm that calculates a route. To do that, the algorithm needs to know a few things:

  • Where the robot is (which it usually gets from the robot’s localisation stack).
  • Where the goal is (usually pre-programmed in).
  • Where obstacles are in the environment.

Data on obstacles is usually delivered via a map file created before the robot sets off. Some robots can build a map as they go, though, using an algorithm like SLAM.

Different planning algorithms prioritise different things when defining the optimal route. Some emphasise the shortest route, others the safest (for instance the route that’s furthest away from all known obstacles). Depending on how advanced your systems are, you may even be able to optimise a route based on energy efficiency.

Whatever mapping method you use, the route the robot plans generally takes the form of a series of waypoints for it to reach. Each waypoint is a desired pose, or state, for your AMR – that means a specific combination of position, orientation, heading, and so on.
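To make that concrete, here’s a minimal sketch of route planning over a small occupancy grid. It uses a generic A* search (one of many planning algorithms, and not necessarily what any particular AMR stack uses), and the grid, start, and goal are made-up placeholders; a real robot would build the grid from its map file or SLAM output.

```python
import heapq

def plan_route(grid, start, goal):
    """Plan a route over a 2D occupancy grid with A*.

    grid: list of lists, 0 = free cell, 1 = obstacle.
    Returns a list of (row, col) waypoints from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: an admissible estimate of the remaining cost.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), start)]        # priority queue ordered by cost-so-far + heuristic
    came_from = {start: None}
    cost_so_far = {start: 0}

    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            # Walk back through the parents to recover the waypoint list.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return list(reversed(path))
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost_so_far[current] + 1
                if new_cost < cost_so_far.get(nxt, float("inf")):
                    cost_so_far[nxt] = new_cost
                    came_from[nxt] = current
                    heapq.heappush(open_set, (new_cost + h(nxt), nxt))
    return None  # no route exists

# Hypothetical 3x4 grid with a wall of obstacles across the middle row.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(plan_route(grid, start=(0, 0), goal=(2, 0)))
```

Swapping the cost function (for instance, penalising cells near obstacles, or weighting moves by estimated energy use) is how a planner like this trades the shortest route against the safest or most efficient one.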

 

How to control a robot

Control theory and control system design are very technical topics that rely on an awful lot of equations to explain. To avoid that level of complexity, we’ll do our best to summarise the key points in this blog.

Along the way, we’ll help you visualise what we’re talking about by sharing how our own prototype control system works.

 

Step 1: modelling the robot

Most controllers need a mathematical model of the thing that needs controlling – in our case, the robot. In control nomenclature, this is called the “plant”. The model describes how the inputs from the robot controller (the commands the controller gives the robot) change the robot’s state, or pose. You could draw it out like this:

 

[Diagram: plant input → change of state]

 

In the case of our own robot, which is a Clearpath Jackal, the plant has two elements: the Jackal’s motors, and the Jackal ROS2 controller. The Jackal is a differential speed-driven robot platform, which means that its movement is controlled by varying the left and right drive wheel speeds. Equal speeds on each side mean the robot travels straight, while different left and right speeds mean the robot is turning. The ROS2 controller translates the ROS commands from our control stack into those differential speeds. So for our Jackal, the diagram above actually looks like this:

 

[Diagram: OxTS Jackal plant control]
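To illustrate what that translation step does (the Jackal’s own ROS2 driver handles it for real; this is just a sketch, and the track width is a placeholder value), converting a forward speed and a turn rate into left and right wheel speeds looks something like this:

```python
# Hypothetical track width (distance between left and right wheels), in metres.
TRACK_WIDTH = 0.4

def twist_to_wheel_speeds(v, w):
    """Convert a body-frame command (forward speed v in m/s, turn rate w in rad/s)
    into (left, right) wheel speeds in m/s for a differential-drive platform."""
    left = v - (w * TRACK_WIDTH / 2.0)
    right = v + (w * TRACK_WIDTH / 2.0)
    return left, right

print(twist_to_wheel_speeds(0.5, 0.0))  # equal speeds: driving straight
print(twist_to_wheel_speeds(0.5, 0.8))  # right wheel faster: turning left
```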

 

There are simple, general mathematical models for common robot types, including drones, car-like platforms, and differential speed-driven platforms, that can serve as a good starting point for a planning and control model. You can then customise that model to make it specific to your robot and your navigation needs.
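For a differential speed-driven platform, that starting-point model is often the so-called unicycle model: the state is position plus heading, and the inputs are a forward speed and a turn rate. A minimal, uncalibrated sketch (not a model of any specific robot) looks like this:

```python
import math

def plant_step(state, v, w, dt):
    """Advance the robot's state (x, y, heading) by one time step of length dt,
    given a forward speed v (m/s) and a turn rate w (rad/s)."""
    x, y, theta = state
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return (x, y, theta)

state = (0.0, 0.0, 0.0)
for _ in range(10):                       # simulate one second of a gentle left turn
    state = plant_step(state, v=0.5, w=0.2, dt=0.1)
print(state)
```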

 

Step 2: adding the controller

Once you have your model, it’s time to add the next bit: the controller. The controller is the bit that provides the inputs to your robot, and it does that based on a pre-determined desired state. In AMR navigation, that desired state is usually the next waypoint on the path that’s been planned.

If you’ve already dabbled in this area, you may have come across open- and closed-loop controllers. For autonomous mobile robots, a closed-loop controller is far and away the best option.

For the uninitiated, an open-loop controller is one where the output doesn’t affect the input. A simple example is a heater with no thermostat. As long as the heater switch is on, the heater generates heat. It’ll keep going no matter how hot the room gets, because the output (the temperature of the room) has no effect on the input (whether or not the heater is on).

Hopefully, it’s obvious why an open-loop controller isn’t great for controlling an AMR. The controller would tell the robot to move in a way that takes it from its current position to its desired position, but wouldn’t react to anything that happened along the way, including deviations from the correct path or collisions with objects or people.

Instead, AMRs work best with closed-loop controllers. Let’s get into them.

 

Closed-loop controllers

A closed-loop controller adds in a component called a state estimator or observer. The state estimator essentially tells the controller whether there is an error (or difference) between the desired state and the measured state. If there is, the controller adjusts the inputs based on the feedback to compensate.

So if you think of that heater we mentioned above, a thermostat would act as a closed-loop controller. It turns on the heater, and then measures the temperature of the room. Once the difference between the desired temperature and the room temperature (the error) is zero, the heater turns off.
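Written as code, that thermostat is about as small as a closed-loop controller gets: the measured output (room temperature) feeds back into the decision about the input (heater on or off). The numbers here are, of course, made up.

```python
desired_temp = 21.0   # the desired state
room_temp = 15.0      # the measured state

for minute in range(30):
    error = desired_temp - room_temp            # feedback: desired minus measured
    heater_on = error > 0.0                     # controller: act on the error
    room_temp += 0.5 if heater_on else -0.1     # plant: the room heats up or cools down
    if minute % 5 == 0:
        print(f"minute {minute:2d}: temp {room_temp:4.1f} C, "
              f"heater {'on' if heater_on else 'off'}")
```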

A closed-loop controller looks like this:

 

 

There are lots of varieties of closed-loop controllers, but one of the most common types is the proportional, integral, and derivative (PID) controller, which an engineer manually tunes for the robot. An AMR using a PID controller will adjust its inputs at a rate proportional to the size of the state error – but thanks to the integral and derivative terms, it adjusts in a way that’s safe for the robot (for instance, by limiting sudden braking or turning that might unbalance it) and reduces the chance of overshooting the goal, which would otherwise need complex micro-adjustments to fix.
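As a rough illustration of the idea (the gains below are placeholders that would need tuning for a real robot, and this isn’t our production code), a PID controller for a single quantity such as heading might look like this:

```python
class PID:
    """Minimal single-input/single-output PID controller."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, desired, measured, dt):
        error = desired - measured
        self.integral += error * dt                       # accumulate the integral term
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Proportional + integral + derivative -> control input
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: steer towards a desired heading (radians).
heading_pid = PID(kp=1.2, ki=0.1, kd=0.3)
turn_rate_cmd = heading_pid.update(desired=0.5, measured=0.2, dt=0.05)
print(turn_rate_cmd)
```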

 

Optimal controllers

Many modern control systems (including ours) will also use optimisation algorithms – hence the name. Optimal control theory is a large branch of mathematics that has applications in topics as diverse as robotics, astronautics, and economics – but for our purposes, it’s enough to say that optimisation algorithms also allow the controller to find inputs that will achieve a specific goal.

That goal could be something simple, like “find the control scheme that minimises the state error on the way to the desired state” – essentially automating the manual adjustments an engineer might make to a PID controller. Or, they can be more complex: “find the control scheme that minimises the state error AND minimises the energy used to get to the desired state.”

A notable example of an optimal controller is a linear quadratic regulator (LQR) controller. LQR controllers use a mix of state feedback and knowledge of how the robot will move (from the model) to predict the robot’s future state and optimise the inputs based on that prediction. Although popular, LQR and other controllers using predictive models do require compute power to run their algorithms.
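For a flavour of how that works, here is a sketch of a discrete-time LQR gain computed by iterating the Riccati equation. The two-state model (position and velocity along one axis) and the cost weights are purely illustrative, not a model of our Jackal; in practice you might lean on a library routine such as SciPy’s solve_discrete_are instead.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])        # state transition: [position, velocity]
B = np.array([[0.0],
              [dt]])              # input: acceleration command
Q = np.diag([10.0, 1.0])          # penalise state error (position more than velocity)
R = np.array([[0.1]])             # penalise control effort (e.g. energy use)

# Iterate the discrete Riccati equation until the cost-to-go matrix P converges.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed-loop control law: u = -K @ (state - desired_state)
state_error = np.array([[1.0], [0.0]])    # 1 m position error, currently at rest
u = -K @ state_error
print("LQR gain K:", K)
print("control input u:", u)
```

The Q and R matrices are where the “minimise state error AND minimise effort” trade-off from the previous paragraph lives: increasing R makes the controller more sparing with its inputs, while increasing Q makes it chase the desired state more aggressively.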

 

The OxTS prototype

We’ve already shown you some of how our planning and control setup works. Here’s the full diagram:

 

 

As you can see, we’ve got a linear quadratic regulator (LQR) controller in the system, which is doing all its optimising work in order to pass forward and steering velocities to the robot as ROS2 commands. As we’ve said, the Jackal then translates those commands into the differential speed commands the motors need in order to move the robot around.

The secret sauce here is our aiding device – an inertial navigation system, or INS.

OxTS INS devices in particular are very good at providing accurate pose estimations, covering everything from global position to heading to pitch and roll. On top of this, the OxTS GAD Interface allows us to plug in additional sensors, including the AMR’s perception sensors, to provide a more robust state estimation in indoor spaces and other areas where GNSS signal is limited.

Our INS also includes an extended Kalman Filter which allows us to filter out erroneous measurements from the INS and its aiding sensors, improving its reliability.

That data, as we’ve said, is then fed back into the LQR controller – which then updates its inputs to help the Jackal get where it needs to go.

During the testing phase (using a computer-simulated robot), we evaluated both PID and LQR controllers. Because PID controllers are single-input/single-output, we had to use two controllers: one for forward velocities, and one for steering velocities. Because of this, the PID setup was tricky to tune and therefore hard to make work reliably. The LQR controller, by contrast, is multi-input and multi-output, so only one controller was needed for both forward and steering velocities. That also meant that the controller could work with a full model of the Jackal’s dynamics when running its optimising algorithms. As a result, it was much easier to get to a repeatable level of performance in the simulation – and that performance was repeated with the live robot, too.

 

Autonomous Robot Navigation Solution Brief

AMRs need a robust robot localisation solution; a tool that not only records the position and orientation of the robot, but also operates both indoors and outdoors.

This solution brief steps through the aspects we recommend our customers consider when deciding on their source of localisation for their autonomous mobile robots.

Read the solution brief to learn how the right robot localisation solution can help your AMR project, including the key questions you need to ask yourself before embarking on a project.

AMR Solution Brief

We hope you enjoyed this blog and it’s helped you if you’re just starting out on your AMR journey.

If you’d like to learn more about what we can currently do for AMR engineers, view our application page.

Alternatively, if you’ve got a specific project that you’d like to talk to us about, contact us using the form below. We’re always excited to discuss the latest and greatest robotics projects.

Keep an eye out for the next blog in our series: an introduction to decision making and safety.


