Join
To subscribe to our listserv and receive weekly email reminders about meetings:
Send an email to listserv [at] princeton.edu from the email address you wish to subscribe.
In the body of the email, write: SUB csml-reading FirstName LastName
The subject should be blank. For more information, please see: helpdesk.princeton.edu
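If you would rather send the subscription message from a script, here is a minimal sketch using Python's standard smtplib and email modules. The SMTP host (smtp.princeton.edu) and the sender address are placeholders, not official settings; substitute your own before running.

```python
# Minimal sketch of sending the LISTSERV subscription command by email.
# Assumptions: "smtp.princeton.edu" and the From address are placeholders --
# replace them with your own outgoing mail server and email address.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@princeton.edu"        # the address you want subscribed
msg["To"] = "listserv@princeton.edu"
# No Subject header is set: the subject should be blank.
msg.set_content("SUB csml-reading FirstName LastName")

with smtplib.SMTP("smtp.princeton.edu") as smtp:  # placeholder SMTP host
    smtp.send_message(msg)
```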
Recent Meetings
Date | Presenter | Topic | Reading
---|---|---|---
**Random matrices** | | |
05/03/18 | Adam Charles | Matrix concentration inequalities; application: short-term memory of linear recurrent neural networks |
04/26/18 | Mikio Aoi | Random projections for least squares |
04/19/18 | Farhan Damani | Low-rank matrix approximation |
04/12/18 | Greg Darnell | Markov's, Chebyshev's, and Chernoff's inequalities; Johnson–Lindenstrauss lemma |
04/05/18 | Bianca Dumitrascu | Motivation for random matrices; introduction to concentration inequalities |
**Information geometry** | | |
03/15/18 | Jordan Ash, Alex Beatson | The Fisher–Rao metric and generalization of neural networks |
03/01/18 | Sidu Jena | Information entropy and maximum entropy methods |
02/08/18 | Diana Cai, Bianca Dumitrascu | Natural gradients, mirror descent, and stochastic variational inference |
02/08/18 | Alex Beatson, Greg Gundersen | Intro: information geometry, f-divergences, the Fisher metric, and the exponential family |
01/25/18 | Sidu Jena, Archit Verma | Differential geometry overview |
Past Meetings
Date | Presenter | Topic | Reading
---|---|---|---
**Reinforcement learning & control theory** | | |
11/16/17 | Archit Verma | Robust control |
11/09/17 | Ari Seff | Optimal control |
11/02/17 | Sidu Jena, Max Wilson | Control theory basics |
10/26/17 | Alex Beatson | Actor-critic |
10/19/17 | Niranjani Prasad, Gregory Gundersen | Q-learning |
10/12/17 | Ryan Adams | Policy gradient methods |
**Misc. previous topics** | | |
6/22 | David Zoltowski, Mikio Aoi | Stochastic Gradient Descent as Approximate Bayesian Inference | Mandt, Hoffman, Blei (2017)
6/1 | Stephen Keeley | Understanding deep convolutional networks | Mallat (2016)
5/25 | David Zoltowski | Variational Inference with Normalizing Flows | Rezende, Mohamed (2016)
5/11 | Jordan Ash | Generative Adversarial Nets | Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio (2014)
5/4 | Greg Darnell | Convolutional Neural Networks Analyzed via Convolutional Sparse Coding | Papyan, Romano, Elad (2016)
4/27 | Bianca Dumitrascu | Why does deep learning work so well? | Lin, Tegmark (2017)
4/20 | Yuki Shiraito | Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning | Gal, Ghahramani (2016)
4/13 | Mikio Aoi | A Probabilistic Theory of Deep Learning | Patel, Nguyen, Baraniuk (2015)
3/30 | Brian DePasquale | Semi-supervised Learning with Deep Generative Models | Kingma, Rezende, Mohamed, Welling (2014)
3/16 | Adam Charles | On the expressive power of deep learning: A tensor analysis | Cohen, Sharir, Shashua (2016)
3/9 | Mikio Aoi | Auto-Encoding Variational Bayes | Kingma, Welling (2014)
3/2 | Bianca Dumitrascu | Stochastic Backpropagation and Approximate Inference in Deep Generative Models | Rezende, Mohamed, Wierstra (2014)
2/16 | Nick Roy | Understanding deep learning requires rethinking generalization | Zhang, Bengio, Hardt, Recht, Vinyals (2017)