Kay - Liyiming Ke
Hi 👋 I work at Physical Intelligence,
researching Machine Learning for Robot Manipulation.
During my PhD at the University of Washington, I built a chopstick-wielding robot to showcase data-driven fine
motor skills.
My path to robotics started unconventionally: I majored in Economics before diving into AI, with internships at
Meta AI, Microsoft Research, and Google Search along the way. I'm driven by curiosity, and I currently aim to
design robot policies that master Robustness, Precision, and Dexterity.
Liyiming Ke is a full-stack roboticist at Physical Intelligence researching Machine Learning for Robot Manipulation. She earned her Ph.D. from the University of Washington with her thesis titled "Data-driven Fine Manipulation". She built a chopstick-wielding robot that demonstrates fine motor skills and developed theoretical frameworks for robot learning. She led a human-robot interactive demonstration at AAAS in 2020 and was selected as one of the Rising Stars in EECS 2023.
Formal Bio • G. Scholar • Github • LinkedIn • Twitter
kay at workplace dot company
π0: A Vision-Language-Action Flow Model for General Robot Control
Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai,
Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Sergey Levine,
Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Lucy Xiaoyang Shi, James Tanner, Quan Vuong,
Anna Walling, Haohuan Wang, Ury Zhilinsky
PDF •
Summary
Can you train a cross-embodiment robotic policy over many, many tasks and expect it to work? We show that
it is promising: a big pre-trained model can be finetuned on a single task and outperform a
dedicated policy that has only seen task-specific data.
Overcoming the Sim-to-Real Gap: Leveraging Simulation to Learn to Explore for Real-World RL
Andrew Wagenmaker, Kevin Huang, Liyiming Ke, Byron Boots, Kevin Jamieson, Abhishek Gupta
NeurIPS 2024
PDF •
Summary
We show that learning an exploration policy in simulation can boost real-world reinforcement learning
finetuning efficiency (versus learning an optimal policy in sim and transferring that policy).
Data Efficient Behavior Cloning for Fine Manipulation via Continuity-based Corrective Labels
Abhay Deshpande, Liyiming Ke, Quinn Pfeifer, Abhishek Gupta, Siddhartha S. Srinivasa
In submission 2024
Webpage •
PDF •
Summary
We apply CCIL to real-world robotic manipulation tasks, and it kinda worked after some design tweaks. The
most juice comes from setting a trust threshold for the generated labels in a task-agnostic way.
CCIL: Continuity-based Data Augmentation for Corrective Imitation Learning
Liyiming Ke*, Yunchu Zhang*, Abhay Deshpande, Siddhartha Srinivasa, Abhishek Gupta
International Conference on Learning Representations (ICLR) 2024
Webpage •
Code •
PDF •
Summary
Enhances the robustness of imitation learning by generating synthetic corrective labels.
The trick is to leverage local continuity in the environment dynamics; for regions that are
discontinuous, quantify the confidence and skip them.
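A minimal sketch of that idea, not the paper's implementation: the dynamics model, Lipschitz estimate, candidate search, and threshold value below are all illustrative assumptions.

```python
import numpy as np

def generate_corrective_labels(demo, dynamics, local_lipschitz,
                               trust_threshold=10.0, noise_scale=0.05,
                               rng=np.random.default_rng(0)):
    """Illustrative continuity-based corrective labels (not the paper's code).

    demo:            list of (state, action, next_state) expert transitions.
    dynamics:        learned model f(s, a) -> predicted next state.
    local_lipschitz: estimated local Lipschitz constant of f around (s, a).
    """
    labels = []
    for s, a, s_next in demo:
        # Only trust labels where the learned dynamics look locally smooth.
        if local_lipschitz(s, a) > trust_threshold:
            continue
        # Imagine the robot has drifted slightly off the demonstration.
        s_off = s + rng.normal(scale=noise_scale, size=s.shape)
        # Pick an action (from small perturbations of the demo action) that the
        # learned model predicts will steer s_off back toward the demo state s.
        candidates = a + rng.normal(scale=noise_scale, size=(64,) + a.shape)
        errors = [np.linalg.norm(dynamics(s_off, c) - s) for c in candidates]
        a_corr = candidates[int(np.argmin(errors))]
        labels.append((s_off, a_corr))  # extra (state, action) pair for behavior cloning
    return labels
```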
Cherry Picking with Reinforcement Learning
Yunchu Zhang*, Liyiming Ke*, Abhay Deshpande, Abhishek Gupta, Siddhartha Srinivasa
Robotics Science and Systems (RSS) 2023
Webpage •
PDF •
Summary
Use reinforcement learning to learn fine motor skills: pick up slippery cherries with chopsticks under
wind or human disturbances. And I refuse to do parameter sweeping or random seed cherry picking.
Real World Offline Reinforcement Learning with Realistic Data Sources
Gaoyue Zhou*, Liyiming Ke*, Siddhartha Srinivasa, Abhinav Gupta, Aravind Rajeswaran, Vikash Kumar
IEEE International Conference on Robotics and Automation (ICRA) 2023
Webpage •
PDF •
Summary
Evaluate offline RL in the real world: the emphasis is on data being "kinda good" but not perfect.
Grasping with Chopsticks: Combating Covariate Shift in Model-free Imitation Learning for Fine Manipulation
Liyiming Ke, Jingqiang Wang, Tapomayukh Bhattacharjee, Byron Boots, Siddhartha S. Srinivasa
IEEE International Conference on Robotics and Automation (ICRA) 2021
PDF •
Summary
Teach a robot to use chopsticks for precise manipulation tasks through human demonstrations. Addresses
covariate shift in imitation learning via noise injection, object-centric transformations, and a
bunch of hacks.
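A rough sketch of the noise-injection and object-centric ideas, under an assumed minimal environment interface; env, expert_action, and the observation keys are stand-ins, not the paper's code.

```python
import numpy as np

def to_object_frame(obs):
    """Object-centric transformation (illustrative): express the end-effector pose
    relative to the target object so the policy generalizes across object placements."""
    return np.concatenate([obs["ee_pos"] - obs["object_pos"], obs["gripper"]])

def collect_noisy_demos(env, expert_action, num_episodes=10, noise_std=0.02,
                        rng=np.random.default_rng(0)):
    """Collect demonstrations while injecting small action noise, so the dataset
    covers states slightly off the expert trajectory and records how the expert
    corrects from them (one way to fight covariate shift in behavior cloning)."""
    dataset = []
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        while not done:
            a_expert = expert_action(obs)                     # clean label from the human
            noise = rng.normal(scale=noise_std, size=a_expert.shape)
            dataset.append((to_object_frame(obs), a_expert))  # train on the clean label...
            obs, done = env.step(a_expert + noise)            # ...but execute the noisy action
    return dataset
```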
Telemanipulation with Chopsticks: Analyzing Human Factors in User Demonstrations
Liyiming Ke, Ajinkya Kamat, Jingqiang Wang, Tapomayukh Bhattacharjee, Christoforos Mavrogiannis,
Siddhartha S. Srinivasa
IEEE International Conference on Intelligent Robots and Systems (IROS) 2020
PDF •
Summary
Built a chopsticks robot and a fun human-interactive demo-collection interface: turns out that tracking
a wand and commanding the robot can be really easy.
Imitation Learning as f-Divergence Minimization
Liyiming Ke, Sanjiban Choudhury, Matt Barnes, Wen Sun, Gilwoo Lee, Siddhartha Srinivasa
International Workshop on the Algorithmic Foundations of Robotics (WAFR) 2020
PDF •
Summary
A unified theoretical framework for imitation learning! Turns out some SOTA algorithms are implicitly minimizing
f-divergences. We show how different divergence measures lead to different imitation learning approaches.
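For context, a hedged sketch of the objective in generic notation; the symbols for the expert and learner state-action distributions are mine, not necessarily the paper's.

```latex
% f-divergence between expert and learner state-action distributions,
% with f convex and f(1) = 0; imitation picks the policy that minimizes it.
\[
D_f\!\left(\rho_{\pi^*} \,\middle\|\, \rho_{\pi}\right)
  \;=\; \mathbb{E}_{(s,a)\sim \rho_{\pi}}\!\left[ f\!\left(\frac{\rho_{\pi^*}(s,a)}{\rho_{\pi}(s,a)}\right) \right],
\qquad
\hat{\pi} \;=\; \arg\min_{\pi}\; D_f\!\left(\rho_{\pi^*} \,\middle\|\, \rho_{\pi}\right).
\]
% Different choices of f recover different algorithms, e.g. KL-type divergences
% relate to behavior cloning and Jensen--Shannon to GAIL-style adversarial imitation.
```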
Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation
Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi,
Siddhartha Srinivasa
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019
★ Oral Presentation, CVPR (5.6%) ★
PDF •
Summary
Baking search and planning into ML-based navigation: we propose a new framework for vision-and-language
navigation, enabling agents to recover from mistakes by maintaining an internal search tree, returning to
previous positions, and trying alternative paths.
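A minimal sketch of the search-with-backtracking idea, illustrative only: the real navigator scores partial paths with learned progress and model signals, while score_fn, candidate_actions, and is_goal here are assumed stand-ins.

```python
import heapq
import itertools

def frontier_navigate(start, candidate_actions, score_fn, is_goal, max_expansions=100):
    """Keep a frontier of partial paths scored by a learned model; always expand the
    globally best one, which may mean physically backtracking to an earlier viewpoint
    before trying an alternative branch."""
    tie = itertools.count()                               # tiebreaker for the heap
    frontier = [(-score_fn([start]), next(tie), [start])]
    best_path = [start]
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, path = heapq.heappop(frontier)              # best-scoring partial path so far
        if is_goal(path[-1]):
            return path                                   # agent believes it has arrived
        best_path = path
        for nxt in candidate_actions(path[-1]):           # candidate next viewpoints
            new_path = path + [nxt]
            heapq.heappush(frontier, (-score_fn(new_path), next(tie), new_path))
    return best_path
```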
Behavioral Experiments in Email Filter Evasion
Liyiming Ke, Bo Li, Yevgeniy Vorobeychik
AAAI Conference on Artificial Intelligence (AAAI) 2016
PDF •
Summary
Studies how humans attempt to evade email spam filters.
Provides insights into adversarial behavior and implications for security system design.
2024  OpenAI Reading Group
2023  Stanford University, ILIAD Lab
2022  Cornell University, EmPRISE Lab
2021  MetaAI Reading Group
2018  Microsoft Research Dialogue Group Reading Group
Reviewer of AAMAS, CoRL, HRI, ICLR, ICRA, IJRR, IROS, NeurIPS, RA-L

2023  Honored to be selected as one of the Rising Stars in EECS
2020  Chopsticks Robot featured on IEEE Spectrum Video Friday
2020  Led a human-robot interactive demo at the AAAS gathering
2017  Graduated as one of the Honor Scholars from Vanderbilt University
2015  First prize in the Vanderbilt Student Consulting for Non-profit Organization