Words2Contact:
Identifying Support Contacts from Verbal Instructions Using Foundation Models

Dionis Totsila, Quentin Rouxel, Serena Ivaldi, Jean-Baptiste Mouret
Inria, CNRS, Université de Lorraine

Abstract

This paper presents Words2Contact, a language-guided multi-contact placement pipeline leveraging large language models (LLMs) and vision-language models (VLMs). Our method is a key component for language-assisted teleoperation and human-robot cooperation: using natural language, human operators can instruct robots where to place their support contacts before whole-body reaching or manipulation. Words2Contact transforms the verbal instructions of a human operator into contact placement predictions; it also handles iterative corrections until the human is satisfied with the contact location identified in the robot’s field of view. We benchmark state-of-the-art LLMs and VLMs in terms of size and contact-prediction performance. We demonstrate the effectiveness of the iterative correction process, showing that even naive users quickly learn how to instruct the system to obtain accurate locations. Finally, we validate Words2Contact in real-world experiments with the Talos humanoid robot, which human operators instruct to place support contacts at different locations and on different surfaces to avoid falling when reaching for distant objects.
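For intuition, the following is a minimal Python sketch of the prediction-and-correction loop the abstract describes. It is not the paper's implementation; every helper name in it (parse_instruction, predict_contact, get_user_feedback) is a hypothetical placeholder standing in for the LLM, VLM, and operator-feedback components.

      # Hypothetical sketch of the Words2Contact loop: an LLM parses the
      # operator's instruction, a VLM grounds it to a pixel in the robot's
      # camera image, and corrections are folded back in until the operator
      # confirms. None of these helper names come from the paper.

      def words2contact(image, instruction):
          """Return a confirmed 2D contact point (pixel coordinates)."""
          while True:
              query = parse_instruction(instruction)      # hypothetical LLM call
              contact_px = predict_contact(image, query)  # hypothetical VLM call
              feedback = get_user_feedback(contact_px)    # e.g. "a bit higher"
              if feedback is None:                        # operator is satisfied
                  return contact_px
              instruction = feedback                      # iterate with the correction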

BibTeX


      @INPROCEEDINGS{10769902,
        author={Totsila, Dionis and Rouxel, Quentin and Mouret, Jean-Baptiste and Ivaldi, Serena},
        booktitle={2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids)},
        title={Words2Contact: Identifying Support Contacts from Verbal Instructions Using Foundation Models},
        year={2024},
        pages={9-16},
        keywords={Accuracy; Large language models; Pipelines; Natural languages; Humanoid robots; Transforms; Benchmark testing; Iterative methods; Surface treatment},
        doi={10.1109/Humanoids58906.2024.10769902}}