Physical manipulation planning with differentiable closed-loop manipulation primitives

Principal Investigators:

Marc Toussaint

Team Members:

Danny Driess (Doctoral researcher)

Reuniting planning and reacting

Research Unit 3, SCIoI Project 39

Optimization-based Task and Motion Planning (TAMP) approaches show remarkable capabilities in finding paths given a scene description and (intuitive) physics models. However, the result of a TAMP algorithm is usually an open-loop trajectory which, when executed in the real world, is likely to fail under disturbances or other sources of uncertainty. The objective of this project is to bring the generality and computational strength of our TAMP framework to real-world execution. Instead of trying to find controllers that execute a planning result, this project investigates a different, novel approach: can the behavior of closed-loop (perception-based) control primitives be embedded directly into the planning framework itself, so that the result is a sequence of reactive closed-loop control policies rather than open-loop paths? We aim to demonstrate the robustness of our control strategies to severe perturbations, e.g., human interventions, in real-world sequential manipulation tasks such as the escape room scenario.
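To illustrate the core idea in the abstract sense, the following is a minimal, hypothetical sketch (not the project's actual framework or API): a plan is represented as a sequence of feedback primitives, each pairing a policy with a success condition, so that execution reacts to the observed state at every step instead of replaying a precomputed trajectory.

```python
# Hypothetical sketch: executing a plan as a sequence of reactive
# closed-loop primitives rather than an open-loop trajectory.
# All names (Primitive, execute, reach) are illustrative only.
from dataclasses import dataclass
from typing import Callable, List

State = float  # stand-in for a full robot/scene state


@dataclass
class Primitive:
    """A closed-loop control primitive: a feedback policy plus a
    success condition that decides when to hand over to the next one."""
    policy: Callable[[State], float]   # maps observed state -> control
    done: Callable[[State], bool]      # terminal/success condition


def execute(primitives: List[Primitive], state: State,
            step: Callable[[State, float], State],
            max_steps: int = 1000) -> State:
    """Run each primitive in closed loop until its condition holds.
    Because every primitive reacts to the *observed* state, a disturbance
    injected between steps is absorbed rather than causing failure."""
    for prim in primitives:
        for _ in range(max_steps):
            if prim.done(state):
                break
            u = prim.policy(state)     # feedback, not trajectory replay
            state = step(state, u)     # environment transition
    return state


# Toy 1-D example: reach 1.0, then reach 2.0.
def reach(goal: float) -> Primitive:
    return Primitive(policy=lambda s: 0.5 * (goal - s),
                     done=lambda s: abs(goal - s) < 1e-3)


def step(s: State, u: float) -> State:
    return s + u


s = execute([reach(1.0), reach(2.0)], 0.0, step)
print(abs(s - 2.0) < 1e-3)  # True: the final goal is reached
```

The contrast with open-loop execution is that nothing above stores a timed trajectory; each primitive keeps acting on the current state until its own condition is satisfied, which is what makes recovery from perturbations possible in principle.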


Related Publications

Driess, D., Ha, J.-S., & Toussaint, M. (2021). Learning to solve sequential physical reasoning problems from a scene image. The International Journal of Robotics Research, 40(12–14), 1435–1466. https://doi.org/10.1177/02783649211056967
Driess, D., Ha, J.-S., Toussaint, M., & Tedrake, R. (2021). Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning. CoRL 2021. http://arxiv.org/abs/2110.00792
Driess, D., Huang, Z., Li, Y., Tedrake, R., & Toussaint, M. (2022). Learning Multi-Object Dynamics with Compositional Neural Radiance Fields. CoRL 2022. https://dannydriess.github.io/compnerfdyn/
Driess, D., Schubert, I., Florence, P., Li, Y., & Toussaint, M. (2022). Reinforcement Learning with Neural Radiance Fields. NeurIPS 2022. https://dannydriess.github.io/nerf-rl
Ha, J.-S., Driess, D., & Toussaint, M. (2022). Deep Visual Constraints: Neural Implicit Models for Manipulation Planning from Visual Input. IEEE Robotics and Automation Letters. https://doi.org/10.1109/LRA.2022.3194955
Harris, J., Driess, D., & Toussaint, M. (2022). FC3: Feasibility-Based Control Chain Coordination. IROS 2022. https://arxiv.org/pdf/2205.04362
Toussaint, M., Harris, J., Ha, J.-S., Driess, D., & Hönig, W. (2022). Sequence-of-Constraints MPC: Reactive Timing-Optimal Control of Sequential Manipulation. IROS 2022. https://www.user.tu-berlin.de/mtoussai/22-SecMPC/