john so

johnso <at> stanford <dot> edu

{ github | twitter | linkedin | resume }

I am currently an AI Resident at 1X. I am also an MS CS student at Stanford University, where I'm fortunate to be advised by Shuran Song as a member of Stanford REAL (Robotics and Embodied Artificial Intelligence Lab). Previously, I completed my BS in EECS at UC Berkeley, where I was fortunate to be advised by Pieter Abbeel, Stephen James, and Xingyu Lin as part of Berkeley RLL (Robot Learning Lab).

research

My dream is for robots to become commonplace in everyday households; an important step is enabling robots to quickly adapt prior knowledge to new scenes or tasks. Toward this, I aim to answer two questions:

  1. How do we pre-train using out-of-distribution data, such as cross-embodiment demos or videos?
  2. How do we use in-domain robotics data to enable sample-efficient policy learning?

My recent interests are in capturing useful priors, such as motion and skills, from multi-task video datasets for robot learning. Long term, I hope to leverage perspectives from cognitive science to inform how robots can learn from, and like, humans.

Any-point Trajectory Modeling for Policy Learning

Chuan Wen*, Xingyu Lin*, John So*, Qi Dou, Kai Chen, Yang Gao, Pieter Abbeel

We learn to predict the future trajectories of arbitrary points by pre-training on actionless videos. Using these trajectories for downstream policy learning, we demonstrate sample-efficient learning and cross-embodiment knowledge transfer.

{ paper | arXiv | website }

SpawnNet: Learning Generalizable Visuomotor Skills from Pre-trained Networks

Xingyu Lin*, John So*, Sashwat Mahalingam, Fangchen Liu, Pieter Abbeel

We extract dense features from pre-trained networks to learn generalizable manipulation skills. Our approach improves categorical generalization over paradigms such as naively using pre-trained representations.

ICRA 2024 { paper | arXiv | website | code }

Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data

John So*, Amber Xie*, Sunggoo Jung, Jeffrey Edlund, Rohan Thakker, Ali Agha-mohammadi, Pieter Abbeel, Stephen James

We use photorealistic simulation to learn a segmentation model and train a navigation policy with RL in the learned segmentation space. We deploy the policy zero-shot to a real vehicle.

CoRL 2022 { paper | arXiv | website | code }

teaching

miscellaneous

As an undergrad, I spent the majority of my time outside of research helping to build, organize, and lead Machine Learning at Berkeley (ML@B), serving as the organization's president in Fall 2022. We presented a white paper about our structure and initiatives at the NeurIPS 2022 Broadening Research Collaborations in ML Workshop; you can find a preprint here.


I also keep a blog about my thoughts and experiences. You can find it here.


I like to think about how to best teach, learn, and optimize for fulfillment. Shoot me an email or DM if you'd like to chat :)