CSML Reading Group

The Princeton CSML Reading Group is a journal club that meets weekly on Thursdays, 2:30–4pm, in 26 Prospect. The purpose of the group is to systematically learn new topics, whether foundational or cutting-edge, related to machine learning, statistics, and data analysis.

Organized by Alex Beatson and Gregory Gundersen.

Meetings

Time: Thursdays, 2:30–4pm

Location: 26 Prospect Ave, room 105


Join

To subscribe to our listserv and receive weekly email reminders about meetings:

1. Send an email to listserv [at] princeton.edu from the address you wish to subscribe.
2. Leave the subject line blank.
3. In the body of the email, write: SUB csml-reading FirstName LastName

For more information, please see helpdesk.princeton.edu.
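For example, a subscription email for a hypothetical member named Jane Doe (a placeholder, not a real subscriber) would look like:

To: listserv [at] princeton.edu
Subject: (leave blank)
Body: SUB csml-reading Jane Doe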

Recent Meetings

Date | Presenter | Topic | Reading

Random matrices

04/19/18 | Farhan Damani | Low-rank matrix approximation
04/12/18 | Greg Darnell | The Markov, Chebyshev, and Chernoff inequalities; the Johnson–Lindenstrauss lemma
04/05/18 | Bianca Dumitrascu | Motivation for random matrices; introduction to concentration inequalities

Information geometry

03/15/18 | Jordan Ash, Alex Beatson | The Fisher–Rao metric and generalization of neural networks
03/01/18 | Sidu Jena | Information entropy and max-entropy methods
02/08/18 | Diana Cai, Bianca Dumitrascu | Natural gradients, mirror descent, and stochastic variational inference
02/08/18 | Alex Beatson, Greg Gundersen | Intro: information geometry, f-divergences, the Fisher metric, and the exponential family
01/25/18 | Sidu Jena, Archit Verma | Differential geometry overview

Past Meetings

Date | Presenter | Topic | Reading

Reinforcement learning & control theory

11/16/17 | Archit Verma | Robust control
11/09/17 | Ari Seff | Optimal control
11/02/17 | Sidu Jena, Max Wilson | Control theory basics
10/26/17 | Alex Beatson | Actor-critic methods
10/19/17 | Niranjani Prasad, Gregory Gundersen | Q-learning
10/12/17 | Ryan Adams | Policy gradient methods

Misc. previous topics

6/22 | David Zoltowski, Mikio Aoi | Stochastic Gradient Descent as Approximate Bayesian Inference | Mandt, Hoffman, Blei (2017)
6/1 | Stephen Keeley | Understanding Deep Convolutional Networks | Mallat (2016)
5/25 | David Zoltowski | Variational Inference with Normalizing Flows | Rezende, Mohamed (2016)
5/11 | Jordan Ash | Generative Adversarial Nets | Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio (2014)
5/4 | Greg Darnell | Convolutional Neural Networks Analyzed via Convolutional Sparse Coding | Papyan, Romano, Elad (2016)
4/27 | Bianca Dumitrascu | Why does deep learning work so well? | Lin, Tegmark (2017)
4/20 | Yuki Shiraito | Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning | Gal, Ghahramani (2016)
4/13 | Mikio Aoi | A Probabilistic Theory of Deep Learning | Patel, Nguyen, Baraniuk (2015)
3/30 | Brian DePasquale | Semi-supervised Learning with Deep Generative Models | Kingma, Rezende, Mohamed, Welling (2014)
3/16 | Adam Charles | On the Expressive Power of Deep Learning: A Tensor Analysis | Cohen, Sharir, Shashua (2016)
3/9 | Mikio Aoi | Auto-Encoding Variational Bayes | Kingma, Welling (2014)
3/2 | Bianca Dumitrascu | Stochastic Backpropagation and Approximate Inference in Deep Generative Models | Rezende, Mohamed, Wierstra (2014)
2/16 | Nick Roy | Understanding Deep Learning Requires Rethinking Generalization | Zhang, Bengio, Hardt, Recht, Vinyals (2017)