CSML Reading Group

The Princeton CSML Reading Group is a journal club that meets weekly on Fridays at 5:30 p.m. in CSML 103 (26 Prospect Ave.), with occasional additional meetings on Mondays at 5:30 p.m. We discuss recent high-impact papers in the broad area of statistics and machine learning, with the goal of fostering in-depth discussion of the papers in an informal atmosphere.

Join

To subscribe to our listserv and receive weekly email reminders about meetings:

Send an email to listserv [at] princeton.edu from the email address you wish to subscribe.
In the body of the email, write: SUB SMLPapers FirstName LastName

Leave the subject line blank. For more information, please see: helpdesk.princeton.edu
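
For example (the name here is only a placeholder), a subscriber named Jane Doe would send a message with an empty subject line and the one-line body:

SUB SMLPapers Jane Doe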

2020 Meetings

Date Presenter Topic Reading
Apr. 2 Michael/Ryan/(other volunteers?) Data Science against COVID-19
Bullock et al., Mapping the Landscape of Artificial Intelligence Applications against COVID-19
Chinazzi et al., The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak

2019 Meetings

Date Presenter Topic Reading
Nov. 22 Michael Measuring Fairness in Machine Learning
Slides
Corbett-Davies and Goel, The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning (arXiv:1808.00023, 2018)
Also discussed:

Nov. 15 Michael Guerzhoy Reconciling modern machine-learning practice and the classical bias–variance trade-off
Slides
Belkin et al., Reconciling modern machine-learning practice and the classical bias–variance trade-off (PNAS, 2019)
Lilian Weng, Are Deep Neural Networks Dramatically Overfitted? (2019)

Nov. 8 Ryan Lee Neural Machine Translation by Jointly Learning to Align and Translate  Bahdanau et al, Neural Machine Translation by Jointly Learning to Align and Translate (ICLR 2015)
Oct. 25 Michael Guerzhoy Wasserstein GAN
Slides
Arjovsky et al., Wasserstein GAN (ICML 2017)
Depth First Learning: Wasserstein GAN (2019)
Read-through: Wasserstein GAN (2017)
Video interview with Martin Arjovsky

Oct. 18 Ryan Lee Generative Adversarial Networks Goodfellow, Generative Adversarial Networks tutorial (2016)
Oct. 11 Michael Guerzhoy Non-delusional Q-Learning (winner, Best Paper Award at NeurIPS 2018)
Slides
Lu et al., Non-delusional Q-learning and value-iteration (NeurIPS 2018)
Oct. 7 Michael Guerzhoy Introductory meeting, take 2 (repeat of Sept. 27)  
Sept. 27 Michael Guerzhoy Organizational matters; a refresher on Q-learning
Slides
Ch. 6 of Sutton and Barto (2017)
Mnih et al., Human-level control through deep reinforcement learning (Nature, 2015)

2016-2018 Meetings

Date Presenter Topic Reading

Random matrices

05/03/18 Adam Charles Matrix concentration inequalities; application: short-term memory of linear recurrent neural networks
04/26/18 Mikio Aoi Random projections for least squares
04/19/18 Farhan Damani Low-rank matrix approximation
04/12/18 Greg Darnell Markov's, Chebyshev's, and Chernoff's inequalities; the Johnson–Lindenstrauss lemma
04/05/18 Bianca Dumitrascu Motivation for random matrices; intro on concentration inequalities

Information geometry

03/15/18 Jordan Ash, Alex Beatson The Fisher-Rao metric and generalization of neural networks
03/01/18 Sidu Jena Information entropy and max entropy methods
02/08/18 Diana Cai, Bianca Dumitrascu Natural gradients, mirror descent and stochastic variational inference
02/08/18 Alex Beatson, Greg Gundersen Intro: information geometry, f-divergences, the Fisher metric, and the exponential family
01/25/18 Sidu Jena, Archit Verma Differential geometry overview

Reinforcement learning & control theory

11/16/17 Archit Verma Robust Control
11/09/17 Ari Seff Optimal Control
11/02/17 Sidu Jena, Max Wilson Control Theory Basics

10/26/17 Alex Beatson Actor-Critic
10/19/17 Niranjani Prasad, Gregory Gundersen Q-Learning
10/12/17 Ryan Adams Policy Gradient Methods

Misc. previous topics

6/22 David Zoltowski, Mikio Aoi Stochastic Gradient Descent as Approximate Bayesian Inference Mandt, Hoffman, Blei (2017)
6/1 Stephen Keeley Understanding deep convolutional networks Mallat (2016)
5/25 David Zoltowski Variational Inference with Normalizing Flows Rezende, Mohamed (2016)
5/11 Jordan Ash Generative Adversarial Nets Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio (2014)
5/4 Greg Darnell Convolutional Neural Networks Analyzed via Convolutional Sparse Coding Vardan Papyan, Yaniv Romano, Michael Elad (2016)
4/27 Bianca Dumitrascu Why does deep learning work so well? Henry W. Lin, Max Tegmark (2017)
4/20 Yuki Shiraito Dropout as Bayesian approximation: Representing Model Uncertainty in Deep Learning Gal & Ghahramani (2016)
4/13 Mikio Aoi A Probabilistic Theory of Deep Learning Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk (2015)
3/30 Brian DePasquale Semi-supervised Learning with Deep Generative Models Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, Max Welling (2014)
3/16 Adam Charles On the expressive power of deep learning: A tensor analysis Cohen, Sharir, Shashua (2016)
3/9 Mikio Aoi Auto-Encoding Variational Bayes Kingma, Welling (2014)
3/2 Bianca Dumitrascu Stochastic Backpropagation and Approximate Inference in Deep Generative Models Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra (2014)
2/16 Nick Roy Understanding deep learning requires rethinking generalization Zhang, Bengio, Hardt, Recht, Vinyals (2017)