Jędrzej Orbik

I am a software developer at Roboception, where I work on computer vision and machine learning for robotics. Previously, I was a research engineer at the UC Berkeley Robotic AI and Learning Lab (RAIL) under the supervision of Sergey Levine.

Twitter  /  GitHub  /  CV


I'm interested in reinforcement learning, machine learning, and image processing for robotics applications. Much of my research is about inferring the physical world for robotic control in a wide variety of domains.

Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning
Homer Walke, Jonathan Yang, Albert Yu, Aviral Kumar, Jędrzej Orbik, Avi Singh, Sergey Levine
Conference on Robot Learning (CoRL), 2022
RSS Workshop on Learning from Diverse, Offline Data, 2022
arXiv / website

We demonstrate that incorporating prior data into robotic reinforcement learning enables autonomous learning, substantially improves the sample efficiency of learning, and enables better generalization. Our method learns new robotic manipulation skills directly from image observations and with minimal human intervention to reset the environment.

Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation
Charles Sun*, Jędrzej Orbik*, Coline Devin, Brian Yang, Abhishek Gupta, Glen Berseth, and Sergey Levine
Conference on Robot Learning (CoRL), 2021
arXiv / blog post / website / code

We propose a reinforcement learning system that can learn mobile manipulation tasks continuously in the real world without any environment instrumentation, without human intervention, and without access to privileged information, such as maps, object positions, or a global view of the environment.

Inverse reinforcement learning for dexterous hand manipulation
Jędrzej Orbik, Dongheui Lee, Alejandro Agostini
IEEE International Conference on Development and Learning (ICDL), 2021
paper / website / source code

We identify that rewards learned with existing IRL approaches are strongly biased towards demonstrated actions due to the scarcity of samples in the vast state-action space of dexterous manipulation applications. In this work, we use statistical tools for random sample generation and reward normalization to reduce this bias. We show that this approach improves the learning stability and robustness of policies learned with the inferred reward.

Human hand motion retargeting for dexterous robotic hand
Jędrzej Orbik, Shile Li, Dongheui Lee
18th International Conference on Ubiquitous Robots (UR), 2021
paper / website

We propose a low-cost framework to map human hand motion from a single RGB-D camera to a dexterous robotic hand. We incorporate neural network pose estimation and inverse kinematics for real-time hand motion retargeting. Empirically, the proposed framework can successfully imitate grasping tasks.