CSML Reading Group

You've reached the website for the Princeton CSML reading group. We are a group of students, postdocs, and faculty members interested in learning more about the latest ideas in data analysis. In the links above you can find the current schedule and info on previous meetings (coming soon).

If you have further questions, feel free to contact Gregory Gundersen or Alex Beatson.

Meetings

Time: Thursdays at 2:30pm

Location: 26 Prospect Ave, Room 105 (classroom)


Join

To subscribe to our listserv and receive weekly email reminders about meetings:

1. Send an email to listserv [at] princeton.edu from the email address you wish to subscribe.
2. Leave the subject line blank.
3. In the body of the email, write: SUB csml-reading FirstName LastName

For more information, please see: helpdesk.princeton.edu
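
If you would rather send the subscription request from a script, here is a minimal sketch in Python using only the standard library. The sender address and the SMTP host (smtp.princeton.edu) are placeholders for illustration; substitute your own address and check helpdesk.princeton.edu for the correct outgoing mail server.

    import smtplib
    from email.message import EmailMessage

    # Build the subscription request described above:
    # blank subject, SUB command in the body.
    msg = EmailMessage()
    msg["From"] = "your.name@princeton.edu"  # the address you want subscribed (placeholder)
    msg["To"] = "listserv@princeton.edu"
    msg["Subject"] = ""  # the subject must be blank
    msg.set_content("SUB csml-reading FirstName LastName")

    # smtp.princeton.edu is an assumed relay; use your institution's SMTP server.
    with smtplib.SMTP("smtp.princeton.edu") as server:
        server.send_message(msg)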

Recent Meetings

Date Presenter Topic Reading
03/01 Sidu Jena Information entropy and max entropy methods TBD
02/08 Diana Cai, Bianca Dumitrascu Natural gradients, mirror descent and stochastic variational inference
02/08 Alex Beatson, Greg Gundersen Intro: information geometry, f-divergences, the Fisher metric, and the exponential family
01/25 Sidu Jena, Archit Verma Differential geometry overview
11/16 Archit Verma Robust Control
11/09 Ari Seff Optimal Control
11/02 Sidu Jena, Max Wilson Control Theory Basics

10/26 Alex Beatson Actor-Critic
10/19 Niranjani Prasad, Gregory Gundersen Q-Learning
10/12 Ryan Adams Policy Gradient Methods


Past Meetings


Spring 2017

Date Presenter Reading

6/22 David Zoltowski, Mikio Aoi Stochastic Gradient Descent as Approximate Bayesian Inference Mandt, Hoffman, Blei (2017)

6/1 Stephen Keeley Understanding deep convolutional networks Mallat (2016)

5/25 David Zoltowski Variational Inference with Normalizing Flows Rezende, Mohamed (2016)

5/11 Jordan Ash Generative Adversarial Nets Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio (2014)

5/4 Greg Darnell Convolutional Neural Networks Analyzed via Convolutional Sparse Coding Vardan Papyan, Yaniv Romano, Michael Elad (2016)

4/27 Bianca Dumitrascu Why does deep learning work so well? Henry W. Lin, Max Tegmark (2017)

4/20 Yuki Shiraito Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning Gal & Ghahramani (2016)
4/13 Mikio Aoi A Probabilistic Theory of Deep Learning Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk (2015)
3/30 Brian DePasquale Semi-supervised Learning with Deep Generative Models Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, Max Welling (2014)
3/16 Adam Charles On the expressive power of deep learning: A tensor analysis Cohen, Sharir, Shashua (2016)
3/9 Mikio Aoi Auto-Encoding Variational Bayes Kingma, Welling (2014)
3/2 Bianca Dumitrascu Stochastic Backpropagation and Approximate Inference in Deep Generative Models Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra (2014)
2/16 Nick Roy Understanding deep learning requires rethinking generalization Zhang, Bengio, Hardt, Recht, Vinyals (2017)