CSML Reading Group

The Princeton CSML Reading Group is a journal club that meets weekly on Fridays at 5:30 p.m. in CSML 103 (26 Prospect Ave.). The group occasionally also meets on Mondays at 5:30 p.m. We discuss recent high-impact papers in the broad area of statistics and machine learning. The goal is to foster in-depth discussion of the papers in an informal atmosphere.

Organized by Michael Guerzhoy (guerzhoy@princeton.edu).

Join

To subscribe to our listserv and receive weekly email reminders about meetings:

1. Send an email to listserv [at] princeton.edu from the address you wish to subscribe.
2. In the body of the email, write: SUB SMLPapers FirstName LastName
3. Leave the subject line blank.

For more information, please see: helpdesk.princeton.edu


2019 Meetings

Date | Presenter | Topic | Reading
Sept. 27 | Michael Guerzhoy | Organizational matters; a refresher on Q-learning (slides) | Ch. 6 of Sutton and Barto; Mnih et al., Human-level control through deep reinforcement learning (Nature, 2015)
Oct. 7 | Michael Guerzhoy | Introductory meeting, take 2 (repeat of Sept. 27) |
Oct. 11 | Michael Guerzhoy | Non-delusional Q-learning (winner, Best Paper Award at NeurIPS 2018) (slides) | Lu et al., Non-delusional Q-learning and value-iteration (NeurIPS 2018)
Oct. 18 | Ryan Lee | Generative Adversarial Networks | Goodfellow, Generative Adversarial Networks tutorial
Oct. 25 | Michael Guerzhoy | Wasserstein GAN | Arjovsky et al., Wasserstein GAN
Nov. 1 | Ryan Lee | Neural Machine Translation by Jointly Learning to Align and Translate | Bahdanau et al., Neural Machine Translation by Jointly Learning to Align and Translate

Past Meetings

Date | Presenter | Topic | Reading

Random matrices

05/03/18 | Adam Charles | Matrix concentration inequalities; application: short-term memory of linear recurrent neural networks
04/26/18 | Mikio Aoi | Random projections for least squares
04/19/18 | Farhan Damani | Low-rank matrix approximation
04/12/18 | Greg Darnell | Markov, Chebyshev, and Chernoff inequalities; Johnson–Lindenstrauss lemma
04/05/18 | Bianca Dumitrascu | Motivation for random matrices; intro to concentration inequalities

Information geometry

03/15/18 | Jordan Ash, Alex Beatson | The Fisher–Rao metric and generalization of neural networks
03/01/18 | Sidu Jena | Information entropy and maximum-entropy methods
02/08/18 | Diana Cai, Bianca Dumitrascu | Natural gradients, mirror descent, and stochastic variational inference
02/08/18 | Alex Beatson, Greg Gundersen | Intro: information geometry, f-divergences, the Fisher metric, and the exponential family
01/25/18 | Sidu Jena, Archit Verma | Differential geometry overview

Reinforcement learning & control theory

11/16/17 | Archit Verma | Robust control
11/09/17 | Ari Seff | Optimal control
11/02/17 | Sidu Jena, Max Wilson | Control theory basics
10/26/17 | Alex Beatson | Actor-critic
10/19/17 | Niranjani Prasad, Gregory Gundersen | Q-learning
10/12/17 | Ryan Adams | Policy gradient methods

Misc. previous topics

6/22 | David Zoltowski, Mikio Aoi | Stochastic Gradient Descent as Approximate Bayesian Inference | Mandt, Hoffman, Blei (2017)
6/1 | Stephen Keeley | Understanding deep convolutional networks | Mallat (2016)
5/25 | David Zoltowski | Variational Inference with Normalizing Flows | Rezende, Mohamed (2016)
5/11 | Jordan Ash | Generative Adversarial Nets | Goodfellow et al. (2014)
5/4 | Greg Darnell | Convolutional Neural Networks Analyzed via Convolutional Sparse Coding | Papyan, Romano, Elad (2016)
4/27 | Bianca Dumitrascu | Why does deep learning work so well? | Lin, Tegmark (2017)
4/20 | Yuki Shiraito | Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning | Gal, Ghahramani (2016)
4/13 | Mikio Aoi | A Probabilistic Theory of Deep Learning | Patel, Nguyen, Baraniuk (2015)
3/30 | Brian DePasquale | Semi-supervised Learning with Deep Generative Models | Kingma, Rezende, Mohamed, Welling (2014)
3/16 | Adam Charles | On the Expressive Power of Deep Learning: A Tensor Analysis | Cohen, Sharir, Shashua (2016)
3/9 | Mikio Aoi | Auto-Encoding Variational Bayes | Kingma, Welling (2014)
3/2 | Bianca Dumitrascu | Stochastic Backpropagation and Approximate Inference in Deep Generative Models | Rezende, Mohamed, Wierstra (2014)
2/16 | Nick Roy | Understanding Deep Learning Requires Rethinking Generalization | Zhang, Bengio, Hardt, Recht, Vinyals (2017)