Robotic Manipulation

Perception, Planning, and Control

Russ Tedrake

© Russ Tedrake, 2020-2022
How to cite these notes, use annotations, and give feedback.

Note: These are working notes used for a course being taught at MIT. They will be updated throughout the Fall 2022 semester.


Force Control

Over the last few chapters, we've developed a great initial toolkit for perceiving objects (or piles of objects) in a scene, planning grasps, and moving the arm to grasp. But there is a lot more to manipulation than grasping!

In fact, the signs have already been there if you were paying close attention. I'm sure that you noticed image number 9954 in my clutter segmentation dataset. Good old 09954.png. Just in case you happened to miss it, here it is again.

I don't know how many of the other 9999 random scenes that we generated are going to have this problem, but 9954 makes it nice and clear. How in the world are we going to pick up that "Cheez-it" cracker box with our two-fingered gripper? (I know there are a few of you out there thinking about how easy that scene is for your suction gripper, but suction alone isn't going to solve all of our problems either.)

Click here to watch the video.

Just for fun, I asked my daughter to try a similar task with the "two-fingered gripper" I scrounged from my kitchen. How are we to program something like that!

The term nonprehensile manipulation means "manipulation without grasping", and humans do it often. It is easy to appreciate this fact when we think about pushing an object that is too big to grasp (e.g. an office chair). But if you pay close attention, you will see that humans make use of strategies like sliding and environmental contact often, even when they are grasping. These strategies just come to the forefront in non-grasping manipulation.

As we build up our toolkit for prehensile and nonprehensile manipulation, one thing that has been missing from our discussion so far, which has been predominantly kinematic, is a careful treatment of forces. This chapter aims to begin that discussion. By the end, I hope you'll agree that we have a pretty satisfying way to flip up that box!

A simple model

As always in our battle against complexity, I want to find a setup that is as simple as possible (but no simpler)! Here is my proposal for the box-flipping example. First, I will restrict all motion to a 2D plane; this allows me to chop off two sides of the bin (so you can easily see inside) and also reduces the number of degrees of freedom we have to consider. In particular, we can avoid the quaternion floating-base coordinates by adding a PlanarJoint to the box. Instead of using a complete gripper, let's start even simpler and just use a "point finger". I've visualized it as a small sphere, and modeled two control inputs as providing force directly on the $x$ and $z$ coordinates of the point mass.

The simple model: a point finger, a cracker box, and the bin all in 2D. The green arrows are the contact forces.

Even in this simple model, and throughout the discussions in this chapter, we will have two dynamic models that we care about. The first is the full dynamic model used for simulation: it contains the finger, the box, and the bin, and has a total of 5 degrees of freedom ($x, z, \theta$ for the box and $x, z$ for the finger). The second is the robot model used for designing the controllers: this model has just the two degrees of freedom of the finger, and experiences unmodelled contact forces. By design, the equations of this second model are particularly simple: $$\begin{bmatrix}m & 0 \\ 0 & m \end{bmatrix} \dot{v} = m \begin{bmatrix}\ddot{x} \\ \ddot{z} \end{bmatrix} = mg + \begin{bmatrix} u_x \\ u_z \end{bmatrix} + f^c,$$ where $m$ is the mass, $g = [0, -9.81]^T$ is the gravity vector, $u$ is the control input vector, and $f^c$ is the contact force (from the world to the finger). Notably absent is the contact Jacobian which normally pre-multiplies the contact forces, $\sum_i J_i^T f^{c_i};$ in the case of a point finger this Jacobian is always the identity matrix, and there is only one point at which contact can be made, so there is no meaningful reason to distinguish between multiple forces.
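To make that robot model concrete, here is a minimal sketch of the finger-only dynamics in plain Python/NumPy (not the full Drake simulation); the mass value and the explicit-Euler integration step are assumptions for illustration, and the contact force $f^c$ is simply an input that the environment would supply.

```python
import numpy as np

m = 1.0                      # point-finger mass (kg), assumed value
g = np.array([0.0, -9.81])   # gravity vector [x, z]

def finger_dynamics(u, f_c):
    """Acceleration of the point finger: m*vdot = m*g + u + f_c."""
    return g + (u + f_c) / m

def euler_step(p, v, u, f_c, dt=1e-3):
    """One explicit-Euler step of the finger state (position p, velocity v)."""
    vdot = finger_dynamics(u, f_c)
    return p + dt * v, v + dt * vdot
```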

Check yourself: can the finger move the box? (Only if the coefficient of friction is large enough.)

(Direct) force control

Almost always, we implement a low-level controller that converts some more intuitive command interface down into the generalized force inputs on the robot. In the case of direct force control, the abstraction that we want is to be able to directly "control" the contact forces. (In the general case, we must specify the contact location to provide this interface; again the point finger allows us to ignore that detail for the moment).

What information do we need to regulate the contact forces? Certainly we need the desired contact force, $f^{c_d}$. In general, we will need the robot's state (though in the immediate example, the dynamics of our point mass are not state dependent). But we also need to either (1) measure the robot accelerations (which we've repeatedly tried to avoid), (2) assume the robot accelerations are zero, or (3) provide a measurement of the contact force so that we can regulate it with feedback.

Let's consider the case where the robot is already in contact with the box. Let's also assume for a moment that the accelerations are (nearly) zero. This is actually not a horrible assumption for most manipulation tasks, where the movements are relatively slow. In this case, our equations of motion reduce to $$f^c = - mg - u.$$ Our force controller implementation can be as simple as $u = -mg - f^{c_d}.$ Note that we only need to know the mass of the robot to implement this controller.
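As a minimal sketch (with the same assumed NumPy setup as above), that quasi-static force controller is a single line; the estimated finger mass is the only model parameter it needs.

```python
import numpy as np

m = 1.0                      # estimated finger mass (kg), assumed value
g = np.array([0.0, -9.81])   # gravity vector [x, z]

def force_controller(f_c_desired):
    """At (near) zero acceleration, u = -m*g - f_c_desired yields f_c = f_c_desired."""
    return -m * g - f_c_desired
```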

What happens when the robot is not in contact? In this case, we cannot reasonably ignore the accelerations, and applying the same control results in $m\dot{v} = - f^{c_d}.$ That's not all bad. In fact, it's one of the defining features of force control that makes it very appealing. When you specify a desired force and don't get it, the result is accelerating the contact point in the (opposite) direction of the desired force. In practice, this (locally) tends to bring you into contact when you are not in contact.

Commanding a constant force

Let's see what happens when we run a full simulation which includes not only the non-contact case and the contact case, but also the transition between the two (which involves collision dynamics). I'll start the point finger next to the box, and apply a constant force command requesting a horizontal contact force from the box. I've drawn the $x$ trajectory of the finger for different (constant) contact force commands.

This is a plot of the horizontal position of the finger as a function of time, given different constant desired force commands.

For all strictly positive desired force commands, the finger will accelerate at a constant rate until it collides with the box (at $x=0.089$). For small $f^{c_d}$, the box barely moves. For an intermediate range of $f^{c_d}$, the collision with the box is enough to start it moving, but friction eventually brings the box and therefore also the finger to rest. For large values, the finger will keep pushing the box until it runs into the far wall.

Consider the consequences of this behavior. By commanding force, we can write a controller that will come into a nice contact with the box with essentially no information about the geometry of the box (we just need enough perception to start our finger in a location for which a straight-line approach will reach the box).

This is one of the reasons that researchers working on legged robots also like force control. On a force-capable walking robot, we might mimic position control during the "swing phase", to get our foot approximately where we are hoping to step. But then we switch to a force control mode to actually set the foot down on the ground. This can significantly reduce the requirements for accurate sensing of the terrain.

A force-based "flip-up" strategy

Here is my proposal for a simple strategy for flipping up the box. Once in contact, we will use the contact force from the finger to apply a moment around the bottom left corner of the box to generate the rotation. But we'll add constraints to the forces that we apply so that the bottom corner does not slip.

Flipping the box. Click here for an animation of the controller we'll write in the example below. But it's worth running the code, where you can see the contact force visualization, too!

These conditions are very natural to express in terms of forces. And once again, we can implement this strategy with very little information about the box. The position of the finger will evolve naturally as we apply the contact forces. It's harder for me to imagine how to write an equally robust strategy using a (high-gain) position-controller finger; it would likely require many more assumptions about the geometry (and possibly the mass) of the box.

A force-based flip-up strategy


Let's encode the textual description above. I'll use $C$ for the contact frame between the finger and the box, and $A$ for the contact frame for the lower left corner of the box contacting the bin. The friction cone provides (linear inequality) constraints on the forces we want to apply. Within those constraints, we would like to rotate up the box. Constrained least-squares is as natural a formulation for this as any! Here is one version: \begin{align*}\min_{f^C_W} \quad& \left| w f^{C}_{C_x} - \text{PID}(\theta_d, \theta) \right|^2,\\ \subjto \quad & |f^C_{W_x}| \le \hat\mu_A (\hat{m}g - f^C_{W_z}), \\ & f^C_{C_z} \ge 0, \qquad |f^C_{C_x}| \le \hat\mu_C f^C_{C_z}, \\ \text{with} \quad & {}^WR^C(\theta) = \begin{bmatrix} -\sin\theta & -\cos\theta \\ \cos\theta & -\sin\theta \end{bmatrix}.\end{align*} I've used $\text{PID}$ here as shorthand for a simple proportional-integral-derivative term. Implementing this strategy assumes:

  • We have some approximation, $\hat\theta$, for the orientation of the box. We could obtain this from point cloud normal estimation, or even from tracking the path of the finger.
  • We have conservative estimates of the coefficients of static friction between the wall and the box, $\hat\mu_A$, and between the finger and the box, $\hat\mu_C$, as well as the box mass, $\hat{m}.$ (You should experiment and decide for yourself whether these estimates can be loose).
  • We assume that $\theta_d(t)$ changes slowly (in particular, that the box accelerations are small), so that we can mostly ignore the unknown inertial terms from the box.
Apart from the finger making some contact, there are no other assumptions about the shape of the box!

To understand this controller, let's break down the dynamics of the task.

Once the box has rotated up off the bin bottom, the only forces on the box are the gravitational force, the force that the bin applies on the box corner, call it $f^A$, and the contact force, $f^C$, that the finger applies on the box. By convention, the contact frames $A$ and $C$ have the $z$ axis aligned with the contact normal (pointing into the box). Remember the contact geometry we use for the cracker box has small spheres in the corners; so both of these contacts are "sphere on box", with the box normals defining the contact normals (e.g. the $x$ axis is aligned with the wall of the box), and ${}^WR^A = I$. The friction cone constraints are: \begin{gather*}f^A_{W_z} \ge 0, \qquad |f^A_{W_x}| \le \mu_A f^A_{W_z}, \\ f^C_{C_z} \ge 0, \qquad |f^C_{C_x}| \le \mu_C f^C_{C_z}.\end{gather*} The orientation of frame $C$ is a function of the box orientation, ${}^WR^C(\theta)$, and is written above.


The dynamics of the box can be written as \begin{gather*} m\ddot{x} = f^A_{W_x} + f^C_{W_x} \\ m \ddot{z} = f^A_{W_z} + f^C_{W_z} - mg \\ I\ddot\theta = {}^Ap^{CM}_W \times \begin{bmatrix} 0 \\ -mg\end{bmatrix} + {}^Ap^C_W \times f^C_W, \end{gather*} where $I$ is the moment of inertia of the box taken around the pivot corner, $A$, and $p^{CM}$ is the position of the center of mass. We don't want our controller to depend on most of these parameters, but it helps to understand them! If we move slowly (the velocities and accelerations are nearly zero), then we can write $$f^A_{W_x} = -f^C_{W_x}, \qquad f^A_{W_z} = mg - f^C_{W_z}.$$ Taken together, the friction cone constraints on the bin imply the following constraints on our finger contact: $$|f^C_{W_x}| \le \mu_A (mg - f^C_{W_z}).$$ Note that the full dynamic derivation is not too much harder; I did it on paper by writing ${}^{CM}p^A = [-\frac{h}{2}, -\frac{w}{2}]^T,$ with the height, $h$, and width, $w$, of the box. Then one can write the no-slip constraint at the bin as $\phi(q) = {}^{CM}p^A_{W_x} = \text{constant},$ which implies $\ddot{\phi} = 0,$ which gives the full expression for $f^A_{W_x}$ (as long as we're in the friction cone).

Now here's a nice approximation. By choosing to make contact near the bottom of the box (since we know where the bin is), we can approximate $${}^A\tau^{Box}_{W_y} = {}^Ap^{C}_W \times f^C_W \approx -{}^Ap^C_{C_z} f^C_{C_x} = -w f^C_{C_x}.$$ In words, the moment produced by the finger contact is approximately the tangential force on the box (times the width of the box). I therefore claim that choosing the desired torque to be equal to $$w f^{C_d}_{C_x} \approx \text{PID}(\theta_d, \theta),$$ is a good strategy for regulating the orientation of the box. It does not assume we know the box geometry nor inertial properties.

That's a lot of details, but all to justify a very simple and robust controller.
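Here is a minimal sketch of that constrained least-squares problem using Drake's MathematicalProgram (imported from pydrake.solvers; older Drake versions used pydrake.solvers.mathematicalprogram). The numerical values (mass and friction estimates, contact height $w$, gain) are placeholders, and the PID term is reduced to a proportional term for brevity; in the notes' example a controller like this would run inside a Drake system, but here it is just a standalone function.

```python
import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

def flip_up_force(theta, theta_d, m_hat=0.4, mu_A=0.5, mu_C=0.5,
                  w=0.08, kp=20.0, g=9.81):
    """Pick the finger contact force f^C (expressed in frame C) that tracks the
    desired box orientation while staying inside both friction cones."""
    prog = MathematicalProgram()
    f_C = prog.NewContinuousVariables(2, "f_C")   # [f_C_x, f_C_z] in frame C

    # Rotation from the contact frame C to the world frame, as given above.
    R_WC = np.array([[-np.sin(theta), -np.cos(theta)],
                     [ np.cos(theta), -np.sin(theta)]])
    f_W = R_WC @ f_C                              # the same force, world frame

    # Cost: the finger's tangential force should produce the desired moment.
    pid = kp * (theta_d - theta)                  # stand-in for the PID term
    prog.AddQuadraticCost((w * f_C[0] - pid) ** 2)

    # Friction cone at the finger/box contact (frame C).
    prog.AddLinearConstraint(f_C[1] >= 0)
    prog.AddLinearConstraint(f_C[0] <= mu_C * f_C[1])
    prog.AddLinearConstraint(-f_C[0] <= mu_C * f_C[1])

    # Quasi-static friction cone at the box/bin corner (world frame).
    prog.AddLinearConstraint(f_W[0] <= mu_A * (m_hat * g - f_W[1]))
    prog.AddLinearConstraint(-f_W[0] <= mu_A * (m_hat * g - f_W[1]))

    result = Solve(prog)
    return result.GetSolution(f_C)
```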


We have multiple controllers in this example. The first is the low-level force controller that takes a desired contact force and sends commands to the robot to attempt to regulate this force. The second is the higher-level controller that is looking at the orientation of the box and deciding which forces to request from the low-level controller.

Please also understand that this is not some unique or optimal strategy for box flipping. I'm simply trying to demonstrate that sometimes controllers which might be difficult to express otherwise can easily be expressed in terms of forces!

Indirect force control

There is a nice philosophical alternative to controlling the contact interactions by specifying the forces directly. Instead, we can program our robot to act like a (simple) mechanical system that reacts to contact forces in a desired way. This philosophy was described nicely in an important series of papers by Ken Salisbury introducing stiffness control Salisbury80 and then Neville Hogan introducing impedance control Hogan85a+Hogan85b+Hogan85c.

This approach is conceptually very nice. With only knowledge of the parameters of the robot itself (not the environment), we can write a controller so that when we push on the end-effector, the end-effector pushes back (using the entire robot) as if you were pushing on, for instance, a simple spring-mass-damper system. Rather than attempting to achieve manipulation by moving the end-effector rigidly through a series of position commands, we can move the set points (and perhaps stiffness) of a soft virtual spring, and allow this virtual spring to generate our desired contact forces.

This approach can also have nice advantages for guaranteeing that your robot won't go unstable even in the face of unmodeled contact interactions. If the robot acts like a dissipative system and the environment is a dissipative system, then the entire system will be stable. Arguments of this form can ensure stability for even very complex systems, building on the rich literature on passivity theory or, more generally, Port-Hamiltonian systems Duindam09.

Our simple model with a point finger is ideal for understanding the essence of indirect force control. The original equations of motion of our system are $$m\begin{bmatrix}\ddot{x} \\ \ddot{z} \end{bmatrix} = mg + u + f^c.$$ We can write a simple controller to make this system act, instead, like a (passive) mass-spring-damper system: $$m \begin{bmatrix}\ddot{x} \\ \ddot{z} \end{bmatrix} + b \begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} + k \begin{bmatrix} x - x_d \\ z - z_d \end{bmatrix} = f^c,$$ with the rest position of the spring at $(x_d, z_d).$ The controller that implements this follows easily: $$u = -mg - b \begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} - k \begin{bmatrix} x - x_d \\ z - z_d \end{bmatrix}.$$ In the point finger case this has the familiar form of a proportional-derivative (PD) controller, but with an additional "feed-forward" term to cancel out gravity.
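A minimal sketch of that stiffness controller for the point finger, in plain NumPy (the mass, stiffness, and damping values are placeholders):

```python
import numpy as np

m, k, b = 1.0, 100.0, 4.0    # mass, stiffness, damping (placeholder values)
g = np.array([0.0, -9.81])

def stiffness_controller(p, v, p_d):
    """u = -m*g - b*v - k*(p - p_d): the closed-loop finger behaves like a
    mass-spring-damper with rest position p_d, for whatever contact force arises."""
    return -m * g - b * v - k * (p - p_d)
```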

Technically, if we are just programming the stiffness and damping, as I've written here, then a controller of this form would commonly be referred to as "stiffness control", which is a subset of impedance control. We could also change the effective mass of the system; this would be impedance control in its full glory. My impression, though, is that changing the effective mass is most often considered not worth the complexity that comes from the extra sensing and bandwidth requirements.

The literature on indirect force control has a lot of terminology and implementation details that are important to get right in practice. Your exact implementation will depend on, for instance, whether you have access to a force sensor and whether you can command forces/torque directly. The performance can vary significantly based on the bandwidth of your controller and the quality of your sensors. See e.g. Villani08 for a more thorough survey (and also some fun older videos), or Whitney87 for a nice earlier perspective.

Teleop with stiffness control

I didn't give you a teleop interface with direct force control; it would have been very difficult to use! Moving the robot by positioning the set points on virtual springs, however, is quite natural. Take a minute to try moving the box around, or even flipping it up.

To help your intuition, I've made the bin and the box slightly transparent, and added a visualization (in orange) of the virtual finger or gripper that you are moving with the sliders.

A stiffness-control-based "flip-up" strategy

Let's embrace indirect force control to come up with another approach to flipping up the box. Flipping the box up in the middle of the bin required very explicit reasoning about forces in order to stay inside the friction cone in the bottom left corner of the box. But there is another strategy that doesn't require as precise control of the forces. Let's push the box into the corner, and then flip it up.

To make this one happen, I'd like to imagine creating a virtual spring -- you can think of it like a taut rubber band -- that we attach from the finger to a point near the wall just a little above the top left corner of the box. The box will act like a pendulum, rotating around the top corner, with the rubber band creating the moment. At some point the top corner will slip, but the very same rubber band will have the finger pushing the box down from the top corner to complete the maneuver.

Consider the alternative: writing an estimator and controller that must detect the moment of slip and then make a new plan. That is not a road to happiness. By using only our model of the robot to make the robot act like a different dynamical system at the contact point, we can accomplish all of that!

A stiffness-control-based flip-up strategy

This controller almost couldn't be simpler. I will just command a trajectory that moves the virtual finger to just in front of the wall. This will push the box into contact and establish our bracing contact force. Then I'll move the virtual finger (the other end of our rubber band) up the wall a bit, and we can let mechanics take care of the rest!
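As a sketch, that command could be a simple first-order-hold setpoint trajectory for the virtual finger: one segment that approaches the wall, and one that slides up it. The waypoints and timing below are made up for illustration, not taken from the notes' notebook.

```python
import numpy as np
from pydrake.trajectories import PiecewisePolynomial

# Virtual-finger setpoint waypoints (x, z) in meters: approach the wall, then
# slide the setpoint up it.  The numbers and times are purely illustrative.
times = [0.0, 2.0, 6.0]
setpoints = np.array([[0.00, 0.16, 0.16],    # x setpoint
                      [0.05, 0.05, 0.25]])   # z setpoint
p_d_trajectory = PiecewisePolynomial.FirstOrderHold(times, setpoints)

p_d = p_d_trajectory.value(3.0)   # 2x1 setpoint to feed the stiffness controller
```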

Hybrid position/force control

There are a number of applications where we would like to explicitly command force in one direction, but command positions in another. One classic example is wiping or polishing a surface -- you might care about the amount of force you are applying normal to the surface, but use position feedback to follow a trajectory in the directions tangent to the surface. In the simplest case, imagine controlling force in $z$ and position in $x$ for our point finger: $$u = -mg + \begin{bmatrix} k_p (x_d - x) + k_d (\dot{x}_d - \dot{x}) \\ -f^{C_d}_{W_z} \end{bmatrix}.$$ If we want the forces/positions in a different frame, $C$, then we can use $$u = -mg + {}^WR^{C} \begin{bmatrix} k_p (p^{C_d}_{C_x} - p^{C}_{C_x}) + k_d (v^{C_d}_{C_x} - v^{C}_{C_x}) \\ -f^{C_d}_{C_z} \end{bmatrix}.$$ By commanding the normal force, you not only have the benefit of controlling how hard the robot is pushing on the surface, but also gain some robustness to errors in estimating the surface normal. If a position-controlled robot estimated the normal of a wall badly, then it might leave the wall entirely in one direction, or push extremely hard in the other. Having a commanded force in the normal direction allows the position of the robot in that direction to become whatever is necessary to apply the force, and it will follow the wall.
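Here is a minimal sketch of the frame-$C$ version for the point finger, with position feedback along the tangent ($C_x$) and a commanded force along the normal ($C_z$); the gains and the rotation ${}^WR^C$ are placeholders/inputs, not values from the notes.

```python
import numpy as np

m = 1.0                       # point-finger mass (kg), assumed
g = np.array([0.0, -9.81])
kp, kd = 100.0, 4.0           # placeholder gains

def hybrid_controller(p_C, v_C, p_Cd, v_Cd, f_Cd_z, R_WC):
    """Position feedback along the tangent (C_x), commanded force along the
    normal (C_z), assembled in frame C and rotated into the world frame."""
    cmd_C = np.array([kp * (p_Cd[0] - p_C[0]) + kd * (v_Cd[0] - v_C[0]),
                      -f_Cd_z])
    return -m * g + R_WC @ cmd_C
```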

The choice of position control or force control need not be a binary decision. We can simply apply both the position command (as in stiffness/impedance control) and a "feed-forward" force command: $$u = -mg + {}^WR^C \left[K_p (p^{C_d}_C - p^{C}_C) + K_d (v^{C_d}_C - v^{C}_C) + f^{ff}_C \right].$$ As we'll see, this is quite similar to the interface provided by the iiwa (and many other torque-controlled robots).

Peg in hole

One of the classic problems for manipulator force control, inspired by robotic assembly, is the problem of mating two parts together under tight kinematic tolerances. Perhaps the cleanest and most famous version of this is the problem of inserting a (round) peg into a (round) hole. If your peg is square and your hole is round, I can't help. Interestingly, much of the literature on this topic in the late 1970's and early 1980's came out of MIT and the Draper Laboratory.

(Left) the "peg in hole" insertion problem with kinematic tolerance $c$. (Middle) the desired response to a contact force. (Right) the desired response to a contact moment. Figures reproduced (with some symbols removed) from Drake78.

For many peg insertion tasks, it's acceptable to add a small chamfer to the part and/or the top of the hole. This allows small errors in horizontal alignment to be accounted for with a simple compliance in the horizontal direction. Misalignment in orientation is even more interesting. Small misalignments can cause the object to become "jammed", at which point the frictional forces become large and ambiguous (under the Coulomb model); we really want to avoid this. Again, we can avoid jamming by allowing a rotational compliance at the contact. I think you can see why this is a great example for force control!

The key insight here is that the rotational stiffness that we want should result in rotations around the part/contact, not around the gripper. We can program this with stiffness control; but for very tight tolerances the sensing and bandwidth requirements become a real limitation. One of the coolest results in force control is the realization that a clever physical mechanism can be used to produce a "remote-centered compliance" (RCC) completely mechanically, with infinite bandwidth and no sensing required Drake78!

A passive remote-center compliance device. (Left) concept schematic from Drake78. (Right) schematic of a mechanical implementation; image credit: Arne Nordmann, 2008. See here for some example products, and here for a nice (older) video.

Interestingly, the peg-in-hole problem inspired incredibly important and influential ideas in mechanics and control, but also in motion planning Lozano-Perez84. The general ideas are quite relevant and applicable for a wide variety of tasks for which compliant contact is a reasonable strategy, such as opening doors, tool use, etc.

Manipulator control

Using the floating finger/gripper is a good way to understand the main concepts of force control without the details. But now it's time to actually implement those strategies using joint commands that we can send to the arm.

Our starting point is understanding that the equations of motion for a fully-actuated robotic manipulator have a very structured form: \begin{equation}M(q)\ddot{q} + C(q,\dot{q})\dot{q} = \tau_g(q) + u + \sum_i J^T_i(q)f^{c_i}.\label{eq:manipulator} \end{equation} The left side of the equation is just a generalization of "mass times acceleration", with the mass matrix, $M$, and the Coriolis terms, $C$. The right-hand side is the sum of the (generalized) forces, with $\tau_g(q)$ capturing the forces due to gravity, $u$ the joint torques supplied by the motors, and $f^{c_i}$ the Cartesian force due to the $i$th contact, where $J_i(q)$ is the $i$th "contact Jacobian". I introduced a version of these briefly when we described multibody dynamics for dropping random objects into the bin, and have more notes available here. For the purposes of the remainder of this chapter, we can assume that the robot is bolted to the table, so it does not have any floating joints; I've therefore used $\dot{q}$ and $\ddot{q}$ instead of $v$ and $\dot{v}$.

Joint stiffness control

In practice, the way that we most often interface with the iiwa is through its "joint-impedance control" mode, which is written up nicely in Ott08+Albu-Schaffer07. For our purposes, we can view this as stiffness control in joint space: $$u = -\tau_g(q) + K_p(q_d - q) + K_d(\dot{q}_d - \dot{q}) + \tau_{ff},$$ where $K_p, K_d$ are positive diagonal matrices, and $\tau_{ff}$ is a "feed-forward" torque. In practice the controller also includes some joint friction compensation Albu-Schaffer01, but I've left those friction terms out here for the sake of brevity. The controller also does some shaping of the inertias (earning it the label "impedance control" instead of only "stiffness control"), but only the rotor inertias, and the user does not set these effective inertias.
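A minimal sketch of this joint-stiffness law, using a pydrake MultibodyPlant for the gravity term; the gain matrices and the plant/context setup are assumed, and the real iiwa controller of course does much more.

```python
import numpy as np

def joint_stiffness_control(plant, context, q_d, v_d, tau_ff, Kp, Kd):
    """u = -tau_g(q) + Kp (q_d - q) + Kd (v_d - v) + tau_ff.
    `plant` is a pydrake MultibodyPlant and `context` its Context with the
    current robot state already set; Kp, Kd are diagonal gain matrices."""
    q = plant.GetPositions(context)
    v = plant.GetVelocities(context)
    tau_g = plant.CalcGravityGeneralizedForces(context)  # gravity on the RHS
    return -tau_g + Kp @ (q_d - q) + Kd @ (v_d - v) + tau_ff
```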

Check yourself: What is the difference between traditional position control with a PD controller and joint-stiffness control?

The difference in the algebra is quite small. A PD controller would typically have the form $$u=K_p(q_d-q) + K_d(\dot{q}_d - \dot{q}),$$ whereas stiffness control is $$u = -\tau_g(q) + K_p(q_d-q) + K_d(\dot{q}_d - \dot{q}).$$ In other words, stiffness control tries to cancel out the gravity and any other estimated terms, like friction, in the model. As written, this obviously requires an estimated model (which we have for the iiwa, but don't believe we have for most arms with large gear-ratio transmissions) and torque control. But this small difference in the algebra can make a profound difference in practice. The controller is no longer depending on error to achieve the joint position in steady state. As such, we can turn the gains way down, and in practice have a much more compliant response while still achieving good tracking when there are no external torques.


Cartesian stiffness control

A better analogy for the control we were doing with the point finger example is to write a controller so that the robot acts like a simple dynamical system in the world frame. To do that, we have to identify a frame, $C$, on the robot where we want to impose these simple dynamics -- the origin of this frame is the expected point of contact. Following our treatment of kinematics and differential kinematics, we'll define the forward kinematics of this frame as: \begin{equation}p^C = f_{kin}(q), \qquad v^C = \dot{p}^C = J(q) \dot{q}, \qquad a^C = \ddot{p}^C = J(q)\ddot{q} + \dot{J}(q) \dot{q}.\label{eq:kinematics}\end{equation} We haven't actually written the second derivative before, but it follows naturally from the chain rule. Also, I've restricted this to the Cartesian positions; one can think about the orientation of the end-effector, too, but this requires some care in defining the 3D stiffness in orientation.

One of the big ideas from manipulator control is that we can actually write the dynamics of the robot in the frame $C$, by writing the joint torques in terms of a spatial force command, $F^u$: $u = J^T(q) F^u$. This comes from the principle of virtual work. By substituting this and the manipulator equations (\ref{eq:manipulator}) into (\ref{eq:kinematics}) and assuming that the only external contacts happen at $C$, we can write the dynamics: \begin{equation} M_C(q) \ddot{p}^C + C_C(q,\dot{q})\dot{q} = {}^C F^{g}(q) + F^u + F^{ext} \label{eq:cartesian_dynamics},\end{equation} where $$M_C = (J M^{-1} J^T)^{-1}, \qquad C_C = M_C \left(J M^{-1} C+ \dot{J} \right), \qquad {}^CF^g = M_C J M^{-1} \tau_g.$$ Now we can simply apply the controller we used in joint space, e.g.: $$F^u = -{}^C F^{g}(q) + K_p(p^{C_d} - p^C) + K_d(\dot{p}^{C_d} - \dot{p}^C) + F^{ff}.$$ Please don't forget to add some terms to stabilize the null space of your Jacobian if necessary.
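And a sketch of the Cartesian version, again using a pydrake MultibodyPlant. The control frame, the gains, and the choice to control the frame origin are assumptions, and no null-space stabilization is included.

```python
import numpy as np
from pydrake.multibody.tree import JacobianWrtVariable

def cartesian_stiffness_control(plant, context, frame_C, p_d, v_d, Kp, Kd):
    """u = J^T F^u, with F^u = -F^g_C + Kp (p_d - p^C) + Kd (v_d - v^C)."""
    frame_W = plant.world_frame()
    p_CoC = np.zeros(3)   # control the origin of frame_C (assumed offset)

    p = plant.CalcPointsPositions(context, frame_C, p_CoC, frame_W).ravel()
    J = plant.CalcJacobianTranslationalVelocity(
        context, JacobianWrtVariable.kV, frame_C, p_CoC, frame_W, frame_W)
    v = J @ plant.GetVelocities(context)

    Minv = np.linalg.inv(plant.CalcMassMatrix(context))
    tau_g = plant.CalcGravityGeneralizedForces(context)
    M_C = np.linalg.inv(J @ Minv @ J.T)     # task-space ("apparent") mass matrix
    F_g_C = M_C @ J @ Minv @ tau_g          # gravity, expressed at frame C

    F_u = -F_g_C + Kp @ (p_d - p) + Kd @ (v_d - v)
    return J.T @ F_u
```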

Some implementation details on the iiwa

The implementation of the low-level impedance controllers has many details that are explained nicely in Ott08+Albu-Schaffer07. In particular, the authors go to quite some length to implement the impedance law in the actuator coordinates rather than the joint coordinates (remember that they have an elastic transmission in between the two). I suspect there are many subtle reasons to prefer this, but they go to lengths to demonstrate that they can make a true passivity argument with this controller.

The iiwa interface offers a Cartesian impedance control mode. If we want high-performance stiffness control in end-effector coordinates, then we should definitely use it! The iiwa controller runs at a much higher bandwidth (faster update rate) than the interface we have over the provided network API, and handles many implementation details that they have gone to great lengths to get right. But in practice we do not use it, because we cannot switch between joint impedance control and Cartesian impedance control without shutting down the robot. Sigh. In fact, we cannot even change the stiffness gains nor the frame $C$ (aka the "end-effector location") without stopping the robot. So we stay in joint impedance mode and command some Cartesian forces through $\tau_{ff}$ if we desire them. (If you are interested in the driver details, then I would recommend the documentation for the Franka Control Interface, which is much easier to find and read, and is very similar to the functionality provided by the iiwa driver.)

You might also notice that the interface we provide to the ManipulationStation takes a desired joint position for the robot, but not a desired joint velocity. That is because we cannot actually send a desired joint velocity to the iiwa controller. In practice, we believe that they are numerically differentiating our position commands to obtain the desired velocity, which adds a delay of a few control timesteps (and sometimes non-monotonic behavior). I don't really know why we aren't allowed to send our own joint velocity commands.


Putting it all together

Exercises

Force and Position Control

Suppose you are given a point-mass system with the following dynamics:

$$m\ddot{x} = u + f^c$$

where $x$ is the position, $u$ the input force applied on the point-mass, and $f^c$ the contact force from the environment. In order to control the position and force from the robot at the same time, the following controller is proposed:

$$u = \underbrace{k_p(x_d-x) - k_d\dot{x}}_{\text{feedback force for position control}} - \underbrace{f^c_d}_{\text{feedforward force}}$$

Define the error for this system to be:

$$e = (x - x_d)^2 + (f^c - f^c_d)^2$$
  1. Let's consider the case where our system is in free space (i.e. $f^c=0$) and we want to exert a non-zero desired force ($f^c_d \neq 0$) and drive our position to zero ($x_d = 0$). You will show that this is not possible, i.e. we cannot achieve zero error for our desired force and desired position. You can show this by considering the steady-state error (i.e. set $\dot{x}=0,\ddot{x}=0$) as a function of the system dynamics, controller, desired position, and desired force.
  2. Now consider the system is in rigid-body contact with a wall located at $x_d = 0$ and a desired non-zero force ($f^c_d > 0$) is being commanded. Considering the steady-state error in this case, show that the system can achieve zero error.

Box Flip-up

Consider the initial phase of the box flip-up task from the example above, when $\theta\approx 0$. We will assume that the forces can still be approximated as axis-aligned, but the ground contact is only being applied at the pivoting point $A$.

  1. If we do not push hard enough (i.e. $f^C_{C_z}$ is too small), then we will not be able to create enough friction ($f^C_{C_x}$) to counteract gravity. By summing the torques around the pivot point $A$, derive a lower bound for the required normal force $f^C_{C_z}$, as a function of $m,g,$ and $\mu^C$. You may assume that the moment arm of gravity is half that of the moment arm on C, and that the box is square.
  2. If we push too hard (i.e. $f^C_{C_z}$ is too big), then the pivot point will end up sliding to the left. By writing the force balance along the horizontal axis, derive an upper bound for the required normal force $f^C_{C_z}$, as a function of $m,g,$ and $\mu^A$. (HINT: It will be necessary to use the torque balance equation from before)
  3. By combining the lower bounds and upper bounds from the previous problems, derive a relation for $\mu^A,\mu^C$ that must hold in order for this motion to be possible. If $\mu^A\approx 0.25$ (wood-metal), how high does $\mu^C$ have to be in order for the motion to be feasible? What if $\mu^A=1$ (concrete-rubber)?

Hybrid Force-Position Control

For this exercise, you will analyze and implement a hybrid force-position controller to drag a book, as we saw from this video during lecture. You will work exclusively in the accompanying notebook. You will be asked to complete the following steps:

  1. Analyze the conditions for this motion to be feasible.
  2. Implement a control law to achieve the motion.

References

  1. K. Salisbury, "Active stiffness control of a manipulator in Cartesian coordinates", Proc. of the 19th IEEE Conference on Decision and Control, 1980.

  2. Neville Hogan, "Impedance Control: An Approach to Manipulation. Part I - Theory", Journal of Dynamic Systems, Measurement and Control, vol. 107, pp. 1--7, Mar, 1985.

  3. Neville Hogan, "Impedance Control: An Approach to Manipulation. Part II - Implementation", Journal of Dynamic Systems, Measurement and Control, vol. 107, pp. 8--16, Mar, 1985.

  4. Neville Hogan, "Impedance Control: An Approach to Manipulation. Part III - Applications", Journal of Dynamic Systems, Measurement and Control, vol. 107, pp. 17--24, Mar, 1985.

  5. "Modeling and Control of Complex Physical Systems: The Port-Hamiltonian Approach", Springer-Verlag , 2009.

  6. Luigi Villani and Joris De Schutter, "Force Control", Chapter 9 in Springer Handbook of Robotics, 2008.

  7. Daniel E Whitney, "Historical perspective and state of the art in robot force control", The International Journal of Robotics Research, vol. 6, no. 1, pp. 3--14, 1987.

  8. Samuel Hunt Drake, "Using compliance in lieu of sensory feedback for automatic assembly.", PhD thesis, Massachusetts Institute of Technology, 1978.

  9. Tomas Lozano-Perez and Matthew Mason and Russell Taylor, "Automatic synthesis of fine-motion strategies for robots", The International Journal of Robotics Research, vol. 3, no. 1, pp. 3--24, 1984.

  10. Christian Ott and Alin Albu-Schaffer and Andreas Kugi and Gerd Hirzinger, "On the passivity-based impedance control of flexible joint robots", IEEE Transactions on Robotics, vol. 24, no. 2, pp. 416--429, 2008.

  11. Alin Albu-Schaffer and Christian Ott and Gerd Hirzinger, "A unified passivity-based control framework for position, torque and impedance control of flexible joint robots", The international journal of robotics research, vol. 26, no. 1, pp. 23--39, 2007.

  12. Alin Albu-Schaffer and Gerd Hirzinger, "A globally stable state feedback controller for flexible joint robots", Advanced Robotics, vol. 15, no. 8, pp. 799--814, 2001.
