IFOR: Iterative Flow Minimization for Robotic Object Rearrangement

Ankit Goyal1,2
Arsalan Mousavian1
Chris Paxton1
Yu-Wei Chao1
Brian Okorn1
Jia Deng2
Dieter Fox1
1NVIDIA
2Princeton University


● Robotic system for vision-based object rearrangement; iteratively minimizes optical flow between the current and target scenes.

● Input is RGBD images of the current and target scenes; no privileged information is used.

● Trained entirely in simulation; zero-shot transfer to the real world.

● Tested on objects not seen during training.



IFOR: Iterative Flow Minimization for Robotic Object Rearrangement

Ankit Goyal, Arsalan Mousavian, Chris Paxton, Yu-Wei Chao, Brian Okorn, Jia Deng and Dieter Fox

@article{goyal2022ifor,
  title={IFOR: Iterative Flow Minimization for Robotic Object Rearrangement},
  author={Goyal, Ankit and Mousavian, Arsalan and Paxton, Chris and Chao, Yu-Wei and Okorn, Brian and Deng, Jia and Fox, Dieter},
}


Accurate object rearrangement from vision is a crucial problem for a wide variety of real-world robotics applications in unstructured environments. We propose IFOR, an end-to-end method for the challenging problem of object rearrangement for unknown objects, given RGBD images of the original and final scenes. First, we learn an optical flow model purely from synthetic data to estimate the relative transformation of the objects. This flow is then used in an iterative minimization algorithm to achieve accurate positioning of previously unseen objects. Crucially, we show that our method applies to cluttered scenes and to the real world, while training only on synthetic data.
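The core idea above (use flow-induced correspondences to estimate a rigid transform per object, then iterate until the residual flow vanishes) can be sketched as follows. This is a simplified illustration, not IFOR's actual code: the function names are hypothetical, correspondences are given directly rather than produced by a learned flow model such as RAFT, and the rigid fit uses the standard Kabsch/SVD solution to the orthogonal Procrustes problem.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) corresponding 3D points (e.g. object pixels
    lifted with depth and matched via optical flow). Uses the
    Kabsch/SVD solution to the orthogonal Procrustes problem.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def iterative_flow_minimization(current_pts, goal_pts, n_iters=5, tol=1e-4):
    """Toy version of the iterative loop: repeatedly estimate the
    residual transform and apply it until the points align. In the
    full method the correspondences would be re-estimated by the
    flow model after every move; here they are fixed inputs, so a
    single iteration already converges."""
    pts = current_pts.copy()
    for _ in range(n_iters):
        R, t = estimate_rigid_transform(pts, goal_pts)
        pts = pts @ R.T + t
        if np.linalg.norm(pts - goal_pts, axis=1).mean() < tol:
            break
    return pts
```

The reflection guard (`d`) matters in practice: with noisy or degenerate correspondences, the unconstrained SVD solution can return an improper rotation (determinant -1), which no physical object motion can realize.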


IFOR takes as input RGBD images of a current and a goal scene, and uses RAFT to estimate optical flow between them. From this flow it predicts which objects should move and by which transformations. These predictions are sent to a robot planning and execution pipeline capable of grasping unknown objects and motion planning in scenes with unknown geometry.
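Turning a 2D flow field plus depth into the 3D correspondences that drive the transform estimate is a standard pinhole backprojection step. A minimal sketch, assuming simple inputs; the function name, argument layout, and masking scheme are illustrative, not IFOR's actual API:

```python
import numpy as np

def lift_flow_to_correspondences(depth0, depth1, flow, K, mask):
    """Turn a 2D optical-flow field into 3D point correspondences.

    depth0, depth1: (H, W) depth maps of current and goal scenes,
    flow: (H, W, 2) pixel displacement (dx, dy) from current to goal,
    K: 3x3 pinhole camera intrinsics,
    mask: (H, W) bool selecting the pixels of one object.
    Returns (src, dst): matched (N, 3) point sets in camera frame.
    """
    H, W = depth0.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.nonzero(mask)                     # object pixels in frame 0
    # Follow the flow to the matching pixel in frame 1.
    u2 = np.round(u + flow[v, u, 0]).astype(int).clip(0, W - 1)
    v2 = np.round(v + flow[v, u, 1]).astype(int).clip(0, H - 1)
    z0, z1 = depth0[v, u], depth1[v2, u2]
    # Backproject both pixel sets through the pinhole model.
    src = np.stack([(u - cx) * z0 / fx, (v - cy) * z0 / fy, z0], axis=1)
    dst = np.stack([(u2 - cx) * z1 / fx, (v2 - cy) * z1 / fy, z1], axis=1)
    return src, dst
```

A per-object mask is what lets a single dense flow field yield one rigid transform per object rather than one global motion; in practice such masks would come from an unknown-object instance segmentation step, and invalid-depth pixels would also need filtering.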

Real World Examples

Synthetic Examples

Comparison with Prior Work
