Task specification for robotic manipulation in open-world environments is inherently challenging.
Importantly, this process requires flexible and adaptive objectives that align with human intentions
and can evolve through iterative feedback. We introduce Iterative Keypoint Reward (IKER), a framework
that leverages vision-language models (VLMs) to generate and refine visually grounded reward functions that serve as dynamic task
specifications for multi-step manipulation tasks. Given RGB-D observations and free-form language
instructions, IKER samples keypoints from the scene and uses VLMs to generate Python-based reward
functions conditioned on these keypoints. These functions operate on the spatial relationships
between keypoints, enabling precise SE(3) control and allowing the VLMs to serve as proxies that encode human
priors about robotic behaviors. We reconstruct real-world scenes in simulation and use the generated
rewards to train reinforcement learning policies, which are then deployed in the real world, forming
a real-to-sim-to-real loop. Our approach handles diverse scenarios, including both prehensile and
non-prehensile tasks, and demonstrates multi-step task execution, spontaneous error recovery, and
on-the-fly strategy adjustments. The results highlight IKER's effectiveness
in enabling robots to perform multi-step tasks in dynamic environments through iterative reward shaping
and minimal interaction.
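
To illustrate the form a keypoint-conditioned reward can take, below is a minimal sketch; it is not taken from the paper, and the keypoint indices, distance threshold, and weighting are hypothetical assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical sketch of a VLM-generated, keypoint-conditioned reward.
# `keypoints` is an (N, 3) array of 3D keypoint positions in the world
# frame; here index 0 is assumed to lie on the manipulated object and
# index 1 on the desired placement location.
def reward(keypoints: np.ndarray) -> float:
    obj_kp = keypoints[0]   # keypoint sampled on the object
    goal_kp = keypoints[1]  # keypoint marking the target placement
    # Dense term: move the object keypoint toward the goal keypoint.
    dist = float(np.linalg.norm(obj_kp - goal_kp))
    # Sparse bonus once the object is within 2 cm of the goal (assumed threshold).
    success_bonus = 1.0 if dist < 0.02 else 0.0
    return -dist + success_bonus
```

Because the reward is expressed over keypoint positions rather than raw pixels, relative placements and orientations can be encoded by combining distances between several keypoints; the dense-plus-sparse shaping above is one common choice, not necessarily the one used in the paper.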
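
The real-to-sim-to-real loop can also be summarized schematically, as in the sketch below. This is written under assumed interfaces: every callable name and signature is a placeholder for the corresponding pipeline stage, not the paper's API.

```python
from typing import Callable, Optional, Tuple

# Schematic of the iterative real-to-sim-to-real loop. The stage functions
# are passed in as callables so the sketch stays self-contained; their
# names and signatures are hypothetical placeholders.
def iker_loop(
    generate_reward: Callable[[Optional[str]], Callable],     # VLM: feedback -> reward function
    train_policy: Callable[[Callable], object],               # RL training in the reconstructed scene
    execute: Callable[[object], Tuple[bool, Optional[str]]],  # real-world rollout -> (success, feedback)
    max_iters: int = 3,
) -> object:
    policy, feedback = None, None
    for _ in range(max_iters):
        reward_fn = generate_reward(feedback)  # VLM writes or refines the keypoint-based reward
        policy = train_policy(reward_fn)       # train a policy against that reward in simulation
        success, feedback = execute(policy)    # deploy on the robot; keep feedback for the next round
        if success:
            break
    return policy
```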