Marc Toussaint

TU Berlin, Robotics

Marc Toussaint leads the Learning & Intelligent Systems Lab. For SCIoI, he represents the synthetic disciplines at the intersection of AI planning, machine learning, and robotics. In his view, a key to understanding and creating intelligence is the interplay of learning and reasoning, where learning becomes the enabler of strongly generalizing reasoning and acting in the physical world. Within SCIoI, he is interested in studying computational methods and representations that enable efficient learning and general-purpose physical reasoning, and in demonstrating such capabilities on real-world robotic systems.
At SCIoI, he is currently working on Projects 30, 39, and 46.

SCIoI Publications

Toussaint, M., Harris, J., Ha, J.-S., Driess, D., & Hönig, W. (2022). Sequence-of-Constraints MPC: Reactive Timing-Optimal Control of Sequential Manipulation. IROS 2022.
Ortiz-Haro, J., Ha, J.-S., Driess, D., & Toussaint, M. (2021). Structured Deep Generative Models for Sampling on Constraint Manifolds in Sequential Manipulation. CoRL 2021.
Ortiz-Haro, J., Karpas, E., Katz, M., & Toussaint, M. (2022). A Conflict-driven Interface between Symbolic Planning and Nonlinear Constraint Solving. IEEE Robotics and Automation Letters.
Harris, J., Driess, D., & Toussaint, M. (2022). FC3: Feasibility-Based Control Chain Coordination. IROS 2022.
Kamat, J., Ortiz-Haro, J., Toussaint, M., Pokorny, F. T., & Orthey, A. (2022). BITKOMO: Combining Sampling and Optimization for Fast Convergence in Optimal Motion Planning. IROS 2022.
Ha, J.-S., Driess, D., & Toussaint, M. (2022). Deep Visual Constraints: Neural Implicit Models for Manipulation Planning from Visual Input. IEEE Robotics and Automation Letters.
Driess, D., Schubert, I., Florence, P., Li, Y., & Toussaint, M. (2022). Reinforcement Learning with Neural Radiance Fields. NeurIPS 2022.
Driess, D., Huang, Z., Li, Y., Tedrake, R., & Toussaint, M. (2022). Learning Multi-Object Dynamics with Compositional Neural Radiance Fields. CoRL 2022.
Schubert, I., Driess, D., Oguz, O. S., & Toussaint, M. (2021). Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics. NeurIPS 2021.
Driess, D., Ha, J.-S., & Toussaint, M. (2021). Learning to solve sequential physical reasoning problems from a scene image. The International Journal of Robotics Research, 40(12–14), 1435–1466.
Driess, D., Ha, J.-S., Toussaint, M., & Tedrake, R. (2021). Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning. CoRL 2021.