# Robotic Manipulation

Perception, Planning, and Control

Russ Tedrake

How to cite these notes, use annotations, and give feedback.

Note: These are working notes used for a course being taught at MIT. They will be updated throughout the Fall 2022 semester.

# Reinforcement Learning

These days, there is a lot of excitement around reinforcement learning (RL), and a lot of literature available. The scope of what one might consider a reinforcement learning algorithm has also broadened significantly. The classic (and now updated) introduction to RL, and still the best, is the book by Sutton and Barto Sutton18 . For a more theoretical view, try Agarwal20b+Szepesvari10. There are some great online courses as well: Stanford CS234, Berkeley CS285, DeepMind x UCL.

My goal for this chapter is to provide enough of the fundamentals to get us all on the same page, but to focus primarily on the RL ideas (and examples) that are particularly relevant for manipulation. And manipulation is a great playground for RL, due to the need for rich perception, for operating in diverse environments, and for handling rich contact mechanics. Many of the core applied research results have been motivated by and demonstrated on manipulation examples!

# RL Software

There are now a huge variety of RL toolboxes available online, with widely varying levels of quality and sophistication. But there is one standard that has clearly won out as the default interface with which one should wrap up their simulator in order to connect to a variety of RL algorithms: the Gym.

It's worth taking a minute to appreciate the difference between the OpenAI Gym Environment (gym.Env) interface and the Drake System interface; I think it is very telling. My goal (in Drake) is to present you with a rich and beautiful interface to express and optimize dynamical systems, to expose and exploit all possible structure in the governing equations. The goal in Gym is to expose the absolute minimal details, so that it's possible to easily wrap every possible system under the same common interface (it doesn't matter if it's a robot, an Atari game, or even a compiler). Almost by definition, you can wrap any Drake system as a Gym Environment; I have examples of how to do it correctly here. You can also use any Gym environment in the Drake ecosystem; you just won't be able to apply some of the more advanced algorithms that Drake provides. Of course, I think that you should use Drake for your work in RL, too (many people do), because it provides a rich library of dynamical systems that are rigorously authored and tested, and it leaves open the option to put RL approaches head-to-head against more model-based alternatives. I admit I might be a little biased. At any rate, that's the approach we will take in these notes.
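To make the minimalism of that interface concrete, here is a sketch of a tiny environment exposing the classic Gym-style reset/step methods. The environment (a double integrator) and the linear policy gains are my own illustrative choices, not anything from Gym or Drake; it is written as a standalone class (rather than subclassing gym.Env and declaring action/observation spaces, as you would in practice) so that it runs with no dependencies beyond NumPy:

```python
import numpy as np

# A minimal environment with the classic Gym interface shape:
# reset() -> observation, step(action) -> (observation, reward, done, info).
# The system here is a double integrator; everything about it is an
# illustrative assumption, not part of the actual Gym or Drake libraries.
class DoubleIntegratorEnv:
    def __init__(self, dt=0.05, horizon=200):
        self.dt = dt
        self.horizon = horizon
        self.reset()

    def reset(self):
        self.t = 0
        self.state = np.array([1.0, 0.0])  # [position, velocity]
        return self.state.copy()

    def step(self, action):
        q, v = self.state
        u = float(np.clip(action, -1.0, 1.0))   # saturated force input
        self.state = np.array([q + self.dt * v, v + self.dt * u])
        self.t += 1
        reward = -(q**2 + 0.1 * u**2)           # reward regulation to the origin
        done = self.t >= self.horizon
        return self.state.copy(), reward, done, {}

# Roll out a hand-tuned linear policy through the interface.
env = DoubleIntegratorEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = -np.array([1.0, 1.8]) @ obs        # u = -k q - kd v
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Note that nothing in the interface reveals that the dynamics are linear, or even smooth; that opacity is exactly what makes it universal, and exactly what Drake's richer System interface refuses to give up.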

Some people might argue that the more thoughtfully you model your system, the more assumptions you have baked in, making yourself susceptible to "sim2real" gaps; but I think that's simply not the case. Thoughtful modeling includes making uncertainty models that can account for as narrow or broad a class of systems as we aim to explore; good things happen when we can make the uncertainty models themselves structured. I think one of the most fundamental challenges waiting for us at the intersection of reinforcement learning and control is a deeper understanding of the class of models that is rich enough to describe the diversity of problems we are exploring in manipulation (e.g. with RGB cameras as inputs), while providing enough structure that we can exploit it with stronger algorithms, and still allowing the models to continually expand and improve with data.

callout to intuitive physics chapter once it exists

The OpenAI Gym provides an interface for RL environments, but doesn't provide implementations of the RL algorithms themselves. There are a large number of popular repositories for the algorithms, too. As of this writing, I would recommend Stable Baselines 3: it provides a very nice and thoughtfully documented set of implementations in PyTorch.

One other class of algorithms that is very relevant to RL, but not specifically designed for it, is black-box optimization. I quite like Nevergrad, and will also use that here.

# Using gradients of the policy, but not the environment

You can find more details on the derivation and some basic analysis of these algorithms here.

# REINFORCE, PPO, TRPO

Example of each of these algorithms on six-hump camel (with trivial environment/policy).
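As a taste of the simplest of the three, here is a sketch of REINFORCE minimizing the six-hump camel function with a trivial "policy": a fixed-variance Gaussian over actions whose mean we adapt. The gradient of the expected cost with respect to the mean is estimated purely from function evaluations via the score-function (log-likelihood-ratio) trick; all of the hyperparameters below are illustrative choices, not tuned values from any reference implementation:

```python
import numpy as np

# The six-hump camel function: a standard 2D test landscape with
# six local minima (global minima near (±0.0898, ∓0.7126)).
def six_hump_camel(a):
    x, y = a[..., 0], a[..., 1]
    return ((4 - 2.1 * x**2 + x**4 / 3) * x**2
            + x * y + (-4 + 4 * y**2) * y**2)

rng = np.random.default_rng(0)
mu, sigma = np.array([0.5, 0.5]), 0.2   # Gaussian policy: a ~ N(mu, sigma^2 I)
alpha, batch = 0.01, 100
for _ in range(300):
    a = mu + sigma * rng.standard_normal((batch, 2))  # sample a batch of actions
    f = six_hump_camel(a)
    b = f.mean()                                      # baseline reduces variance
    # Score-function estimate of the gradient of E[f(a)] w.r.t. mu:
    # E[(f(a) - b) * d/dmu log N(a; mu, sigma^2)], then descend it.
    grad = ((f - b)[:, None] * (a - mu) / sigma**2).mean(axis=0)
    mu -= alpha * grad
```

Note that the environment (here just a function evaluation) is never differentiated; only the policy's log-probability is. PPO and TRPO use the same score-function gradient at their core, adding trust-region machinery to keep each policy update from moving too far.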

# Control for manipulation should be easy

This is a great time for theoretical RL + controls, with experts from controls embracing new techniques and insights from machine learning, and vice versa. As a simple example, even though the cost landscape for many classical control problems (like the linear quadratic regulator) is not convex in the typical policy parameters, we now understand that gradient descent still works for these problems (there are no spurious local minima), and the class of problems/parameterizations for which we can make statements like this is growing rapidly.
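The LQR observation can be checked numerically in the scalar case. For x[t+1] = a x[t] + b u[t] with u = -k x and running cost q x² + r u², the infinite-horizon cost per unit initial condition is J(k) = (q + r k²)/(1 − (a − b k)²) for stabilizing k. This is not convex in k, yet plain gradient descent recovers the optimum, which we can verify against the scalar Riccati solution (the specific numbers below are my own illustrative choices):

```python
import numpy as np

# Scalar LQR as policy search: x[t+1] = a x + b u, u = -k x,
# cost sum of q x^2 + r u^2. J(k) below is the closed-form
# infinite-horizon cost (per unit x0^2) for stabilizing gains.
a, b, q, r = 1.2, 1.0, 1.0, 1.0

def J(k):
    c = a - b * k                      # closed-loop pole
    assert abs(c) < 1, "k must be stabilizing"
    return (q + r * k**2) / (1 - c**2)

# Plain gradient descent on the (nonconvex) policy cost.
k, step, eps = 1.0, 0.01, 1e-6         # k = 1.0 is stabilizing: |a - b*k| = 0.2
for _ in range(2000):
    g = (J(k + eps) - J(k - eps)) / (2 * eps)   # central finite difference
    k -= step * g

# The exact optimum via the scalar discrete-time Riccati recursion.
P = q
for _ in range(1000):
    P = q + a**2 * P - (a * b * P)**2 / (r + b**2 * P)
k_star = a * b * P / (r + b**2 * P)
```

Gradient descent converges to k_star to high precision, despite the nonconvexity of J; this "gradient dominance" of the LQR cost is exactly the kind of structural result the paragraph above refers to.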

# Model-based RL

MPC with learned models, typically represented with a neural network.

# Stochastic Optimization

For this exercise, you will implement a stochastic optimization scheme that does not require exact analytical gradients. You will work exclusively in . You will be asked to complete the following steps:

3. Prove that the expected value of the stochastic update does not change with baselines.
4. Implement stochastic gradient descent with baselines.
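The claim in step 3 is easy to check numerically before proving it: for a constant baseline b, the score-function estimate of the gradient has the same expected value with or without b, because the score itself has zero mean, while the baseline can dramatically reduce the estimator's variance. A sketch with an arbitrary scalar objective of my choosing:

```python
import numpy as np

# Monte Carlo check that a constant baseline leaves the expected
# score-function gradient unchanged. Policy: a ~ N(mu, sigma^2);
# the true gradient of E[f(a)] w.r.t. mu is 2*(mu - 3) = -4 here.
rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5
f = lambda a: (a - 3.0)**2 + 10.0          # arbitrary illustrative objective

a = mu + sigma * rng.standard_normal(1_000_000)
score = (a - mu) / sigma**2                 # d/dmu of log N(a; mu, sigma^2)

b = 14.0                                    # any constant baseline (near E[f])
g_no_baseline = f(a) * score
g_baseline = (f(a) - b) * score

# Both estimators agree in expectation (E[score] = 0 kills the b term),
# but the baselined version has far lower variance.
```

This is the whole point of step 4: the baseline buys variance reduction for free, with no bias, which is why every practical policy-gradient method uses one.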

# REINFORCE

For this exercise, you will implement the vanilla REINFORCE algorithm on a box pushing task. You will work exclusively in . You will be asked to complete the following steps:

1. Implement the policy loss function.
2. Implement the value loss function.
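The shape of the two losses can be sketched in NumPy for orientation (the exercise itself uses its own networks and notation; the Gaussian policy, linear shapes, and names here are purely illustrative): the policy loss is the negated, advantage-weighted log-probability of the actions taken, and the value loss is a regression of the value predictions onto the observed returns.

```python
import numpy as np

# Illustrative versions of the two losses; real implementations would
# compute these on autodiff tensors so gradients flow to the networks.
def gaussian_log_prob(a, mu, sigma):
    return (-0.5 * ((a - mu) / sigma)**2
            - np.log(sigma) - 0.5 * np.log(2 * np.pi))

def policy_loss(a, mu, sigma, returns, values):
    # REINFORCE with a value baseline: maximize log pi(a) weighted by
    # the advantage, so the loss is the negation of that objective.
    advantage = returns - values
    return -np.mean(gaussian_log_prob(a, mu, sigma) * advantage)

def value_loss(values, returns):
    # Regress the value function onto the observed returns.
    return np.mean((values - returns)**2)

# Tiny usage example with made-up rollout data.
a = np.array([0.1, -0.4, 0.3])              # actions taken
mu, sigma = np.zeros(3), 1.0                # policy outputs at those states
returns = np.array([1.0, 0.2, 0.8])         # observed returns
values = np.array([0.5, 0.5, 0.5])          # value predictions
pl = policy_loss(a, mu, sigma, returns, values)
vl = value_loss(values, returns)
```

One subtlety worth noticing before you implement: the advantage should be treated as a constant (detached) inside the policy loss, so that the value baseline does not receive gradients from the policy objective.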