Michael Luo

I am a 5th-year Master's student (2020-2021) in the UC Berkeley EECS department, studying Artificial Intelligence, Systems, and Robotics. I am advised by Prof. Ion Stoica and am affiliated with Berkeley Artificial Intelligence Research (BAIR) and RISELab. I have also recently done research under Ken Goldberg in the AUTOLab.

Before the Master's program, I earned my B.S. from UC Berkeley with a double major in EECS and Business Administration. Outside of school, I contribute to Ray/RLlib, a popular distributed RL library with over 14k stars on GitHub. In my free time, I enjoy a mix of hobbies, including powerlifting, playing the piano, and competitive programming.

Email  /  CV  /  Google Scholar  /  LinkedIn  /  GitHub

Research

My master's and undergraduate research focuses on practical problems that RL faces in the real world: scaling up data collection with distributed RL, learning safely with safe RL, and adapting quickly to real-world environments with meta-RL. I am also interested in applying RL to beat existing algorithms and models in fields such as NLP, query optimization for databases, and video streaming.

IMPACT: Importance Weighted Asynchronous Architectures with Clipped Target Networks
Michael Luo, Jiahao Yao, Richard Liaw, Eric Liang, Ion Stoica
Accepted to the International Conference on Learning Representations (ICLR), 2020
Arxiv | Video | Code

A distributed reinforcement learning algorithm that combines the sample efficiency of PPO with the data throughput of IMPALA, tuning the tradeoff between distributed data collection and sample-efficient learning to maximize training speed.
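
As a rough illustration, here is the standard PPO-style clipped surrogate that IMPACT builds on. This is a generic sketch, not IMPACT's exact objective, which additionally clips the importance ratio against a slowly updated target network to cope with asynchronous, off-policy data:

```python
# Illustrative PPO-style clipped surrogate (not IMPACT's exact objective);
# IMPACT further computes the importance ratio against a clipped target
# network to handle stale, asynchronously collected samples.
import torch

def clipped_surrogate(logp_new, logp_behavior, advantages, eps=0.2):
    ratio = torch.exp(logp_new - logp_behavior)        # importance weight
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    # Pessimistic (elementwise min) objective, as in PPO; maximize its mean.
    return torch.min(ratio * advantages, clipped * advantages).mean()
```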

Connecting Context-specific Adaptation in Humans to Meta-learning
Rachit Dubey*, Erin Grant*, Michael Luo*, Karthik Narasimhan, Thomas L. Griffiths
Under Review at the Conference on Artificial Intelligence (AAAI), 2021.
Arxiv | Code

We introduce a framework that uses contextual information about a task to guide the initialization of task-specific models before adaptation, which leads to faster adaptation to online feedback than zero-shot multitask approaches.
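
A minimal sketch of the general idea, with assumed module and function names (not the paper's code): a context encoder proposes the initialization of a task-specific model, which is then refined by a few gradient steps on online feedback.

```python
# Illustrative sketch: context-informed initialization followed by
# gradient-based adaptation. All names and shapes are assumptions.
import torch
import torch.nn as nn

class ContextToInit(nn.Module):
    """Maps a task's context vector to an initial parameter vector."""
    def __init__(self, ctx_dim, n_params):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ctx_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_params))

    def forward(self, context):
        return self.net(context)

def adapt(init_params, loss_fn, steps=5, lr=0.1):
    """Inner-loop adaptation starting from the context-informed init."""
    params = init_params.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(params).backward()  # loss on online feedback
        opt.step()
    return params
```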

Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones
Brijen Thananjeyan*, Ashwin Balakrishna*, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho Hwang, Joseph E. Gonzalez, Julian Ibarz, Chelsea Finn, Ken Goldberg
Accepted to the NeurIPS Robot Learning Workshop, 2020.
Website | Arxiv | Code

An algorithm for safe reinforcement learning that uses a set of offline data to learn about constraints before policy learning, and a pair of policies that separate the often conflicting objectives of task-directed exploration and constraint satisfaction, in order to learn contact-rich and visuomotor control tasks.
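
A minimal sketch of the action-selection rule this describes, with illustrative names and threshold (not the paper's implementation): a safety critic trained on offline constraint-violation data gates between the task policy and the recovery policy.

```python
# Hypothetical sketch of the gating rule; names and the threshold value
# are illustrative, not taken from the paper's code.
def select_action(state, task_policy, recovery_policy, q_risk, eps_risk=0.3):
    """Execute the task action unless the safety critic deems it too risky."""
    a_task = task_policy(state)
    # q_risk estimates the chance of a future constraint violation if
    # a_task is taken here; it is pretrained on offline violation data.
    if q_risk(state, a_task) > eps_risk:
        return recovery_policy(state)  # steer back toward safe states
    return a_task
```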

Distributed Reinforcement Learning is a Dataflow Problem
Eric Liang*, Zhanghao Wu*, Michael Luo, Sven Mika, Ion Stoica
Under Review at the Conference on Machine Learning and Systems (MLSys), 2021.
Paper | Code (In RLlib)

We propose RLFlow, a hybrid actor-dataflow programming model for distributed RL that leads to highly composable and performant implementations of RL algorithms, resulting in faster training and significant code reductions.
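
For flavor, here is a toy version of the actor-based pattern that RLlib builds on, written in plain Ray rather than RLFlow's programming model: rollout workers run as remote actors and a driver gathers their samples each iteration.

```python
# Toy actor-based rollout collection in plain Ray (not RLFlow's API).
import ray

ray.init()

@ray.remote
class RolloutWorker:
    def __init__(self, worker_id):
        self.worker_id = worker_id

    def sample(self):
        # Placeholder for environment interaction; returns dummy transitions.
        return [(self.worker_id, t) for t in range(4)]

workers = [RolloutWorker.remote(i) for i in range(2)]
for _ in range(3):  # training iterations
    batches = ray.get([w.sample.remote() for w in workers])
    # A real learner would compute a gradient update from `batches` here.
    print(sum(batches, []))
```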

Discovering Autoregressive Orderings with Variational Inference
Xuanlin Li*, Brandon Trabucco*, Dong Huk Park, Yang Gao, Michael Luo, Sheng Shen, Trevor Darrell
Under Review at the International Conference on Learning Representations (ICLR), 2021. (Reviews in top 7% of submissions.)
Paper

We propose the first domain-independent unsupervised/self-supervised learner that discovers high-quality autoregressive orderings through fully parallelizable end-to-end training, without domain-specific tuning.

LazyDAgger: Reducing Context Switching in Interactive Robot Imitation Learning
Ryan Hoque, Ashwin Balakrishna, Brijen Thananjeyan, Carl Putterman, Michael Luo, Daniel Seita, Daniel S. Brown, Ken Goldberg
Under Review at the International Conference on Robotics and Automation (ICRA), 2021.
Website | Paper

An algorithm for interactive imitation learning that reduces human context switching through sustained interventions while maintaining a supervisor burden comparable to that of prior algorithms.
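
A hypothetical sketch of a hysteresis-style switching rule for sustained interventions (names and thresholds are illustrative, not the paper's code): once the supervisor takes over, control stays with them until the predicted novice-supervisor discrepancy falls back below a lower threshold, avoiding rapid back-and-forth handoffs.

```python
# Illustrative two-threshold switching rule; not the paper's implementation.
def act(state, novice, supervisor, discrepancy, intervening,
        tau_high=0.5, tau_low=0.1):
    """Return (action, intervening). `discrepancy(state)` predicts how far
    the novice's action is from what the supervisor would take."""
    d = discrepancy(state)
    if intervening:
        # Sustain the intervention until the novice looks reliable again,
        # rather than handing control back immediately (fewer switches).
        if d < tau_low:
            return novice(state), False
        return supervisor(state), True
    if d > tau_high:  # request help only when clearly needed
        return supervisor(state), True
    return novice(state), False
```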

AlphaGarden: Learning Seed Placement and Automation Policies For Polyculture Farming with Companion Plants
Yahav Avigal, Anna Deza, William Wong, Sebastian Oehme, Mark Presten, Mark Theis, Jackson Chui, Paul Shao, Huang Huang, Atsunobu Kotani, Satvik Sharma, Michael Luo, Stefano Carpin, Joshua Viers, Stavros Vougioukas, Ken Goldberg
Under Review at the International Conference on Robotics and Automation (ICRA), 2021.
Website | Paper | Code

We investigate different seed placement and pruning algorithms in a polyculture garden simulator to jointly maximize the diversity and coverage of various plant types.

Teaching

CS 189: Introduction to Machine Learning
Teaching Assistant: Fall 2019

CS 162: Operating Systems and System Programming
Reader: Fall 2018

Work Experience
Anyscale
Software Development Intern
Amazon
Software Development Intern
Cisco Meraki
Computer Vision Intern

Website template from Jon Barron