I am a machine learning PhD student at MIT CSAIL, working with Antonio Torralba and Phillip Isola. My research focuses on structures in learned intelligence:
- Understanding how learning algorithms rely on structures/signals in data to produce models.
- Improving the efficiency and generality of learned perception & reasoning by incorporating new useful structures.
Broadly, I am interested in representation learning, reinforcement learning, synthetic data, and dataset distillation.
During my PhD, I have spent time at Meta AI working with Yuandong Tian, Amy Zhang, and Simon S. Du. I also collaborate with Alyosha Efros and Jun-Yan Zhu.
Before MIT, I was an early member of the PyTorch core team (2017-2019) at Facebook AI Research (now Meta AI). I completed my undergraduate studies at UC Berkeley (2013-2017), where I began research with Stuart Russell, Ren Ng, and Alyosha Efros on probabilistic inference, graphics, and image generative models.
At MIT, I helped develop the 6.S898 Deep Learning course, and served as the head TA.
Click here for my CV.
Selected Open Source Projects 
- PyTorch core developer (2017 - 2019; team size <10)
  Data loading, CUDA/CPU kernels, ML ops, API design, autograd optimization, Python bindings, etc.
- CycleGAN and pix2pix in PyTorch maintainer (2018 - now)
- torchreparam developer (2019 - 2020)
  One of the earliest PyTorch toolkits to re-parametrize neural nets (e.g., for hyper-nets and meta-learning); see the sketch after this list.
- Awesome-Dataset-Distillation maintainer (2022 - now)
  Collection of dataset distillation papers from machine learning and vision conferences.
- torchqmet developer (2022 - now)
  PyTorch toolkit for state-of-the-art quasimetric learning.
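A minimal sketch of the re-parametrization idea behind torchreparam, illustrated with the modern torch.func API rather than torchreparam's own interface; the network, data, and replacement parameters below are hypothetical:

    import torch
    import torch.nn as nn

    # Run a module with externally supplied parameters, e.g. produced by a
    # hyper-network or a meta-learning inner-loop update.
    net = nn.Linear(4, 2)
    x = torch.randn(8, 4)

    # Hypothetical replacement parameters (in practice, the output of a hyper-net).
    new_params = {name: torch.randn_like(p) for name, p in net.named_parameters()}

    # Forward pass of `net` using `new_params` instead of its registered parameters.
    out = torch.func.functional_call(net, new_params, (x,))
    print(out.shape)  # torch.Size([8, 2])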
See below for open-source code accompanying my research.
Selected Publications (full list)
- Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning
  [ICML 2023] [Project Page] [arXiv] [Code Coming Soon]
  Tongzhou Wang, Antonio Torralba, Phillip Isola, Amy Zhang
  (Figure: quasimetric geometry + pushing apart start state and goal while maintaining local distances = optimal value $V^*$ and high-performing goal-reaching agents.)
- Improved Representation of Asymmetrical Distances with Interval Quasimetric Embeddings
  [NeurIPS 2022 NeurReps Workshop] [Project Page] [arXiv] [PyTorch Package for Quasimetric Learning]
  Tongzhou Wang, Phillip Isola
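  For intuition, a toy asymmetric distance that satisfies the triangle inequality; this is an illustrative construction, not the paper's IQE head:

    import torch

    # d(x, y) = sum_i max(y_i - x_i, 0): nonnegative, d(x, x) = 0, and it obeys
    # the triangle inequality, yet it is generally asymmetric, unlike a metric.
    def toy_quasimetric(x, y):
        return (y - x).clamp(min=0).sum(dim=-1)

    x = torch.tensor([0.0, 1.0])
    y = torch.tensor([2.0, 0.0])
    print(toy_quasimetric(x, y))  # tensor(2.)
    print(toy_quasimetric(y, x))  # tensor(1.)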
- Denoised MDPs: Learning World Models Better Than The World
  [ICML 2022] [Project Page] [arXiv] [code]
  Tongzhou Wang, Simon S. Du, Antonio Torralba, Phillip Isola, Amy Zhang, Yuandong Tian
- On the Learning and Learnability of Quasimetrics
  [ICLR 2022] [Project Page] [arXiv] [OpenReview] [code]
  Tongzhou Wang, Phillip Isola
- Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
  [ICML 2020] [Project Page] [arXiv] [code]
  Tongzhou Wang, Phillip Isola
  PyTorch implementation of the alignment and uniformity losses:

    import torch

    # bsz : batch size (number of positive pairs)
    # d   : latent dim
    # x   : Tensor, shape=[bsz, d]; latents for one side of positive pairs
    # y   : Tensor, shape=[bsz, d]; latents for the other side of positive pairs

    def align_loss(x, y, alpha=2):
        return (x - y).norm(p=2, dim=1).pow(alpha).mean()

    def uniform_loss(x, t=2):
        return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
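  A hedged usage sketch; the stand-in encoder, two-view batch, and equal weighting below are illustrative assumptions, not the paper's exact recipe:

    import torch
    import torch.nn.functional as F

    encoder = torch.nn.Linear(32, 128)               # stand-in encoder
    a, b = torch.randn(64, 32), torch.randn(64, 32)  # two views of 64 positive pairs

    # Project latents onto the unit hypersphere before applying the losses.
    x = F.normalize(encoder(a), dim=1)
    y = F.normalize(encoder(b), dim=1)

    loss = align_loss(x, y) + (uniform_loss(x) + uniform_loss(y)) / 2
    loss.backward()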
- Dataset Distillation
  [Project Page] [arXiv] [code] [DD Papers]
  Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros
- Meta-Learning MCMC Proposals
  [NeurIPS 2018] [PROBPROG 2018] [ICML 2017 AutoML Workshop Oral] [arXiv]
  Tongzhou Wang, Yi Wu, David A. Moore, Stuart J. Russell
- Learning to Synthesize a 4D RGBD Light Field from a Single Image
  [ICCV 2017] [arXiv]
  Pratul Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, Ren Ng