In this post I’ll be giving some intuition for how a basic control system for a drone in a 2D world could work.
This is purely for fun and explanations are aimed at providing some intuition for the problem space rather than getting too far into implementation (I’ll get there later!).
Let’s say we have a quadcopter and we’d like to write some software that guides it towards a target location. How do we go about doing that? Why is this even a difficult problem?
We’ll start with a simple model for how an ideal drone might work ignoring the altitude component for now. You can control it with your arrow keys below (make sure you click on the canvas first).
As you get a feel for flying our little quadcopter, pay attention to which variable you’re actually controlling (is it velocity or acceleration?).
You aren’t directly controlling velocity, otherwise you’d be able to just let go of an arrow key and immediately get the vehicle to stop.
Instead you’re actually controlling the acceleration of the quadcopter, so if you build up too much speed, then want to come to a stop, you have to accelerate opposite to your velocity.
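The point about controlling acceleration rather than velocity can be sketched as a tiny point-mass model (the names and the Euler-integration timestep here are illustrative, not the post's actual simulation code):

```python
# A minimal 2D point-mass drone model: the input is acceleration, and the
# state is integrated each timestep: acceleration -> velocity -> position.

def step(pos, vel, accel, dt):
    """Advance the drone one timestep using simple Euler integration."""
    vel = (vel[0] + accel[0] * dt, vel[1] + accel[1] * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# Releasing the keys (accel = 0) does NOT stop the drone: velocity persists.
pos, vel = (0.0, 0.0), (0.0, 0.0)
pos, vel = step(pos, vel, (1.0, 0.0), dt=1.0)  # thrust right for one step
pos, vel = step(pos, vel, (0.0, 0.0), dt=1.0)  # no input, but still drifting
```

Note that after the second step the drone has kept moving even with zero input, which is exactly why you have to accelerate opposite to your velocity to stop.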
How exactly could we make this simulated quadcopter fly towards an arbitrary XY position?
We can take the thermostat approach! What does your lovely thermostat do all day? Well, it works really hard to try to keep a temperature reading in a small range of comfort.
When the temperature is too high by some amount, it turns on cooling until the temperature is back to optimal.
If the temperature is too low, it kicks on heating until back to optimal.
In our situation we’ll take this to mean always accelerating towards our target full blast.
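A sketch of that "thermostat" (bang-bang) strategy might look like the following, where `MAX_ACCEL` and the function name are illustrative assumptions rather than the post's simulation code:

```python
import math

# Bang-bang controller: always accelerate at full power straight at the target.

MAX_ACCEL = 1.0  # illustrative full-blast acceleration limit

def bang_bang_accel(pos, target):
    """Return a full-magnitude acceleration vector pointing at the target."""
    ex, ey = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(ex, ey)
    if dist == 0.0:
        return (0.0, 0.0)
    # Normalize the error vector, then scale to full power.
    return (MAX_ACCEL * ex / dist, MAX_ACCEL * ey / dist)
```

Because the output magnitude is always `MAX_ACCEL`, the command direction flips violently once the drone crosses the target, which is the wild jumping described below.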
Let’s see how this type of control system performs below (you can click or tap to move the target location).
You’ll notice that this controller does get us into position eventually, but once it’s on the target the acceleration vector will be jumping wildly in opposite directions.
Now this might work in theory, but you can imagine hard and sudden changes in acceleration commands could put a bunch of mechanical stress on your drone (and generally look a bit scary).
Stop Breaking Things!
So how do we go about solving this problem?
One idea is that small amounts of position error should really only cause an acceleration in proportion to the amount of error. So if we’re far away, we can punch it to get closer, and if we’re already close, we can just use a little bit of acceleration to get there.
This is called a proportional controller. You can see the acceleration vector just getting large when the drone is far away from the target, and staying pretty small otherwise.
This actually works pretty well! Sure, we overshoot quite a bit at the beginning, but things eventually settle down and when we’re really close to the target we aren’t oscillating.
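A proportional controller can be sketched in a few lines; the gain value `KP` and the names here are illustrative, since the actual tuning in the demo isn't shown:

```python
# Proportional controller: acceleration scales with the position error.

KP = 0.5  # illustrative tunable gain

def p_control(pos, target):
    """Acceleration proportional to the error vector (target - pos)."""
    return (KP * (target[0] - pos[0]), KP * (target[1] - pos[1]))

# Far from the target -> large command; close -> gentle command.
far = p_control((0.0, 0.0), (10.0, 0.0))   # (5.0, 0.0)
near = p_control((9.5, 0.0), (10.0, 0.0))  # (0.25, 0.0)
```

Unlike the bang-bang version, the command shrinks smoothly to zero as the error does, so there's no violent flipping near the target.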
Speed is Key
So at this point we’re more or less controlling our drone, and we aren’t frying motors or scaring anyone, so what is there to work on next?
Speed! Specifically settling time. We want our drone on its target as quickly as possible with minimal overshoot.
How do we go faster? Well if we think about it, it’s silly to only start slowing down after we pass the target. We really need a way to teach the drone how to slow down preemptively.
How do we know if we’re going too fast?
One good way is to look at the change in our error in the last timestep.
If our error is smaller than last time, we know we are approaching the target, and if the error is larger than last time, we know we are moving away from the target.
So if the error last timestep minus the error this timestep is very large, that tells us we are approaching the target too quickly and need to slow down.
We can implement this by taking the proportional control value we calculated above, and just subtracting this change in error. We can have separate (tunable) constants for the proportional part and this error difference to adjust the controller’s performance.
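For one axis, that combination might be sketched like this (the gains `KP` and `KD` are illustrative, as is the function shape):

```python
# PD sketch: proportional term plus the per-timestep change in error,
# each with its own tunable gain.

KP, KD = 0.5, 2.0  # illustrative gains

def pd_control(error, prev_error, dt):
    """One axis of PD control: KP * error + KD * d(error)/dt."""
    derivative = (error - prev_error) / dt
    return KP * error + KD * derivative

# Approaching the target fast: error shrank from 10 to 8 in one step, so the
# derivative term opposes the proportional term and brakes early.
cmd = pd_control(error=8.0, prev_error=10.0, dt=1.0)  # 0.5*8 + 2*(-2) = 0.0
```

When the error is shrinking, the derivative is negative and fights the proportional term, which is exactly the preemptive braking we wanted.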
Whoa, that’s pretty good! We confidently accelerate up to our max speed, cruise for a bit, then slow down before hitting our target, perfect!
Before we move on, we can come up with a better name for this error difference we talked about. It’s actually just a numerical derivative! Also, as a side note, since we’re taking the derivative of our error, which happens to be a relative position, this derivative term is really calculating our velocity for us, and preventing that velocity from getting too high!
We’re about to declare victory when all of a sudden you start noticing the drone drifting away from the target.
Not by a ton, it’s not unstable, but there’s definitely a noticeable gap that isn’t going away. What’s up?
Let’s think about the impact of a strong, constant wind blowing toward the south-east on our controller.
You can imagine your proportional controller will completely offset the wind once the drone is off the target by a large enough margin, but there’s nothing in our control algorithm to offset the wind to get us totally on target.
We know our drone is powerful enough to handle the wind and stay on target, we just need something monitoring the wind.
One way of doing this is to keep track of historical error by keeping a sum of all errors in the past, and just adding it into our control formula (with its own tunable constant).
This error sum should grow larger and larger until it influences the controller enough to push the drone onto the target location.
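Adding that running sum to our proportional term might be sketched like this (the gains are illustrative, and `KI` has been chosen for readability rather than realism):

```python
# PI sketch: accumulate the error over time so a constant disturbance like
# wind eventually gets offset.

KP, KI = 0.5, 0.25  # illustrative gains

def pi_step(error, error_sum, dt):
    """One axis: accumulate the error, return (command, new error_sum)."""
    error_sum += error * dt
    return KP * error + KI * error_sum, error_sum

# A constant error of 1.0 (drone pinned off-target by wind): the integral
# grows each step, so the command keeps increasing until the wind is beaten.
cmds, s = [], 0.0
for _ in range(3):
    cmd, s = pi_step(1.0, s, dt=1.0)
    cmds.append(cmd)
```

Even though the proportional term alone has stalled at a fixed value, the integral term keeps climbing step after step, which is what finally pushes the drone through the wind and onto the target.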
You can see this below: enable or disable the error accumulator with the button, and randomize the wind direction to see how adding this gain affects our ability to stay on target.
So obviously accumulating error (or integrating error) and passing that back through our error loop gets rid of that bias issue we were seeing on just the proportional and derivative controller. Cool!
You may have also noticed that we made our overshoot problem worse.
Let’s say the drone is on the far left side of the world, and the target is on the far right side of the world. The drone will accelerate up to max acceleration just based on the proportional feedback, and the drone will get to the target as fast as it can (I added a velocity limit in the simulation world), but the whole time the vehicle is traveling, that error sum is just getting larger and larger.
Once we finally get close to the target, our error sum is still going to be very large, and push us past the target position. This effect lasts until the error sum balances back out to a value closer to 0.
What we really wanted is just for the integrator to kick in when proportional control wasn’t doing enough to solve the control problem, not when the proportional controller already has things under control.
So one naive way we can solve this integral windup problem is just by clearing the integrator sum when the controller output hits maximum.
This means we forget about what our integrator has learned during big transitions and let it build back up once we’re close to the target.
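That naive anti-windup rule can be sketched as follows; `MAX_CMD` and all the names are illustrative placeholders for whatever limit the real controller saturates at:

```python
# Naive integral-windup fix: when the command saturates, clear the
# accumulated error so the integrator only "learns" near the target.

KP, KI, MAX_CMD = 0.5, 0.25, 2.0  # illustrative gains and output limit

def pi_antiwindup_step(error, error_sum, dt):
    error_sum += error * dt
    cmd = KP * error + KI * error_sum
    if abs(cmd) >= MAX_CMD:
        error_sum = 0.0                          # forget history while saturated
        cmd = max(-MAX_CMD, min(MAX_CMD, cmd))   # clamp the output
    return cmd, error_sum

# A huge error saturates the output, so the sum is reset instead of winding up.
cmd, s = pi_antiwindup_step(error=10.0, error_sum=0.0, dt=1.0)
```

During a big transition the output is pinned at the limit anyway, so throwing away the sum costs nothing; once we're close and the output comes off the limit, the integrator starts accumulating again.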
You can enable or disable windup prevention with the top right button!
What we just laid out is a classic control technique called PID control. P stands for proportional, I stands for integral, and D stands for derivative.
This controller, if tuned well (and possibly with some domain-specific tweaks), can handle just about any typical control problem well enough, and serves as the basis for aircraft control, cruise control, and a slew of robotics and self-driving-car applications.
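Putting the three pieces together, a one-axis PID controller might be sketched as a small class (the gains and class shape are illustrative; run one instance per axis to steer the 2D drone):

```python
# A one-axis PID controller: proportional + integral + derivative terms.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.error_sum = 0.0     # integral state
        self.prev_error = None   # derivative state

    def update(self, error, dt):
        """Return the control command for this timestep's error."""
        self.error_sum += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.error_sum + self.kd * derivative
```

For the 2D drone, you'd make one `PID` for x and one for y, feed each its axis of the position error every timestep, and use the two outputs as the commanded acceleration vector.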
Hopefully you gained a bit of intuition about PID!