Aerial Refuelling (Wikipedia)
Air-to-air refuelling - when things go wrong (YouTube)
Robotic Arms (Wikipedia)
Closed-loop Transfer Function (Wikipedia)
Latency (Wikipedia)
Open-Loop System (Wikipedia)
About this video
Teaching a computer to refuel an aeroplane.
For a pilot, air-to-air refuelling is one of the most difficult and dangerous procedures to perform. Automating the process would not only eliminate the risk of human error but would also cut down the number of pilot hours required for training.
Jonathan du Bois, an engineer at the University of Bristol, is currently developing these automated systems using a combination of sensors and control algorithms, not to mention a couple of very large robots. The problem isn't simple, and automated systems bring issues of feedback delay and latency - things which add new layers of complexity.
Before Jon can begin though, he needs to hack into the robots' control system...
This work is funded by Cobham Mission Equipment as part of the ASTRAEA Programme, which seeks to enable the routine use of UAS (Unmanned Aircraft Systems) in all classes of airspace without the need for restrictive or specialised conditions of operation. The ASTRAEA programme is co-funded by AOS, BAE Systems, Cobham, EADS Cassidian, QinetiQ, Rolls-Royce, Thales, the Technology Strategy Board, the Welsh Assembly Government and Scottish Enterprise.
- The Royal Academy of Engineering, Cobham, ASTRAEA
- Dr Jonathan du Bois
- Bristol, UK
Flying an aeroplane is difficult enough. And one of the most difficult operations a pilot can train for is air-to-air refuelling. But what we're trying to do is teach a computer to do it.
Air-to-air refuelling is essentially formation flying at close quarters. The receiver sidles up to the tanker. And he tries to get his probe into the basket.
It takes a lot of skill to do this. And pilots train on a weekly basis, just to keep their skills up. And when it goes wrong, boy, does it go wrong. [CRASHING]
To automate air-to-air refuelling, there are two key technologies that need to be developed. The first is sensors to detect the position of the end of the hose. The second is control algorithms, to guide the receiver aircraft to that position. And this is the kit we're going to be using to do that.
These are standard robotic arms, the sort of thing you'd find in a car production line. But they've got 6 degrees of freedom, which means you can manipulate something in three-dimensional space, in any position. So you've got three degrees of freedom like this and then any orientation, by rotating about different axes.
Now, this is accomplished using six joints. The first joint is at the base here, which allows the whole robot to swivel around there. The second joint is here. The third joint is up at the elbow.
The fourth joint is about the axis of the forearm here. The fifth joint is the wrist, rotating here. And then, the sixth joint is around the end effector here.
Now, there's another joint, another axis, which is the track. And that gives us the room to move backwards and forwards, which extends the range of the robots. And when they want to, they can shift.
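The joint-by-joint tour above is, in effect, a description of forward kinematics: joint angles go in, an end-effector pose comes out. A minimal sketch using a planar two-joint arm as a toy stand-in for the six-joint arms in the video (the link lengths `l1` and `l2` are made up for illustration):

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """Planar 2-link arm: joint angles (radians) -> end-effector pose.

    Each revolute joint adds one degree of freedom; two planar joints
    give x, y position, and the orientation is the summed joint angles.
    The real arms extend this idea to six joints for full 3-D pose.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    phi = theta1 + theta2  # end-effector orientation
    return x, y, phi

# Fully extended along the x-axis:
print(tuple(round(v, 6) for v in forward_kinematics(0.0, 0.0)))  # (1.8, 0.0, 0.0)
```

With six joints the same mapping becomes a product of six rotation-and-offset transforms, which is why any position and orientation within reach can be commanded.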
So this is the refuelling probe. It's mounted rigidly to the receiver aircraft. The pilot has to manoeuvre this probe into the reception coupling on the drogue assembly. And that's when the magic happens.
So the basket hangs off the end of a hose which is trailing from the tanker aircraft. This canopy helps to stabilise it behind the aircraft. But it's still very difficult for a pilot to manoeuvre the nozzle into the reception coupling.
Now, it'd be easy enough to get the probe into the basket using the robots alone. But what we're trying to do is to simulate the environment which the receiver aircraft flies in. So we model the flight dynamics of the two aircraft.
We model the turbulence. We model the wake of the tanker. And we model the bow wave on the receiver aircraft. And these simulations all run in this box here.
Those run in real time. And they pass the position demands to the robot controller. So there are two computers we need to look at in the controller.
This is the main computer. And this is used to calculate the motion trajectories for the robots. This computer here is the axis computer, which is in charge of the motors.
With the help of Lund University in Sweden, we're intercepting the messages between these two computers, which gives us direct access to the motor controller. Effectively, we've hacked the robot controller.
The robots allow us to manipulate the probe and the basket. So they follow the motions from a real-time simulation, which runs in parallel. This simulation allows us to develop the control algorithms for the real aircraft. And underpinning all of these things is the idea of feedback control systems.
So first, we're going to take a look at open-loop control systems, without any feedback. This is effectively a remote-control helicopter. And we're just going to look at the elevation control. So what we're controlling is the fan speed.
Now, it's inherently a stable system. So I can let it go like this. And it'll stay put. But if I give it a little nudge, then it'll start oscillating. And it'll take a while to settle down to that stable position again.
Now, we're going to go ahead and change that. We're now going to close the loop on that control system. And using the feedback, we get a much more stable position. Now, if I nudge it, the fans change speed to compensate. So that controller allows it to stay in the same position.
Now, what we're going to try doing is introducing a delay to this system. So we're putting a delay into that loop. And what that means is that by the time the system is aware of where it is, it's already gone past the position it wants to be in. And so it doesn't have time to compensate.
So what you'll see is that, each time it goes past the stable position, the fans change speed too late. And it diverges from the stable position. So that's why delay is a bad thing in a control system and why we're trying to get rid of it in the robots.
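The effect Jon demonstrates can be reproduced numerically. Below is a toy model of my own (not the rig's actual controller): a one-dimensional "elevation" state under proportional-derivative feedback, where the controller only sees measurements that are a fixed number of samples old. With no delay the loop settles; with a 0.4 s delay the same gains drive it unstable.

```python
def simulate(delay_steps, n_steps=2000, dt=0.01):
    """Toy 1-D elevation loop with a stale-measurement buffer.

    Returns (final state, diverged?). Gains and dynamics are
    illustrative, chosen so the undelayed loop is well damped.
    """
    kp, kd = 40.0, 4.0                   # proportional / derivative gains
    x, v = 1.0, 0.0                      # start offset from the setpoint
    buf = [(x, v)] * (delay_steps + 1)   # FIFO of delayed measurements
    for _ in range(n_steps):
        xm, vm = buf.pop(0)              # stale measurement the controller sees
        u = -kp * xm - kd * vm           # PD control law
        v += u * dt                      # integrate acceleration (Euler)
        x += v * dt                      # integrate velocity
        buf.append((x, v))
        if abs(x) > 100.0:               # clearly diverging - stop early
            return x, True
    return x, False

xf, diverged = simulate(0)
print(f"no delay:   diverged={diverged}, |x|={abs(xf):.1e}")
xf, diverged = simulate(40)              # 40 samples = 0.4 s of delay
print(f"0.4s delay: diverged={diverged}")
```

The delayed loop fails for exactly the reason given in the transcript: by the time the controller reacts, the state has already moved past where the measurement says it is, so each correction arrives too late and pumps energy into the oscillation.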
Let's have a look at what we just saw. We had an input going into a controller. The controller determined the fan speed for the helicopter. And then, the outputs from that system were the position and velocity.
So that's a simple, open-loop system. What we then did was we closed the loop, by measuring the position with sensors and feeding that back into the controller.
What you saw was the effect: when there was a delay in the sensors, the aircraft became unstable. Now, this is very similar to the system we're looking at behind us.
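The loop just described is what the linked closed-loop transfer function article formalises. A sketch in conventional notation, where C(s) is the controller, G(s) is the helicopter dynamics, and τ is the sensor delay (the symbols are mine, not from the video):

```latex
% Open loop: the controller output drives the plant directly
Y(s) = C(s)\,G(s)\,R(s)

% Closing the loop with an ideal (instant) sensor
T(s) = \frac{C(s)\,G(s)}{1 + C(s)\,G(s)}

% Closing the loop with a sensor delay \tau in the feedback path
T_\tau(s) = \frac{C(s)\,G(s)}{1 + C(s)\,G(s)\,e^{-s\tau}}
```

The delay term e^{-sτ} leaves the loop gain unchanged but adds a phase lag of ωτ at each frequency ω; once that lag exceeds the loop's phase margin, the closed loop goes unstable, which is the diverging behaviour of the delayed helicopter.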
We can split this into two halves. The bottom half is the real world. So we've got the physical sensors that we're hoping to evaluate.
And the top half is completely simulated. And on the boundary between these two, you have the robots. Now, the robots translate the positions from the simulation into real physical positions that the sensors can then detect.
By bringing the robots in, we now have two sources of delay. So there are delays in the sensors. And there are delays in the robots.
We're trying to evaluate the delays in the sensors and how they affect our control algorithms. So we need to eliminate the delays from the robots, to make sure they don't add to the effect.
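The video doesn't say how the robot delays are removed, but the simplest kind of compensation for a known, fixed lag is to predict the delayed measurement forward using its velocity. A hedged illustration (my own sketch, not the rig's actual method):

```python
def predict_forward(x, v, tau):
    """First-order forward prediction for a delayed measurement.

    Given a position x and velocity v measured tau seconds ago,
    estimate where the target is *now* by extrapolating along the
    velocity. Accurate only while the motion is roughly constant-
    velocity over the delay interval.
    """
    return x + v * tau

# A target moving at 2 m/s, measured 0.05 s ago at 1.0 m:
print(f"{predict_forward(1.0, 2.0, 0.05):.2f}")  # 1.10
```

More sophisticated schemes (model-based predictors, for instance) follow the same idea: use knowledge of the delay and the dynamics to act on where the system will be, not where it was.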
Now that we've ironed out the creases in the test rig, the real work can begin.
Collections containing this video:
Putting engineers on film and filming engineering.