One central challenge in neuroscience is to understand how neural populations represent information and produce the remarkable computational abilities of our brains. Indeed, neuroscientists increasingly form scientific hypotheses that can only be studied at the level of the neural population, and exciting new large-scale datasets have followed. Capitalizing on this trend, however, requires two major efforts from applied statistical and machine learning researchers: (i) methods for finding latent structure in these data, and (ii) methods for statistically validating that structure. First, I will discuss our machine learning research that combines latent variable modeling, deep learning, dynamical systems, and dimensionality reduction, and I will describe how we have applied those models to advance understanding of the computational structure of various neural systems, in particular the primate and rodent motor cortices. Second, I will detail a problem of growing importance throughout unsupervised learning: how to understand when these analysis techniques artificially create structure, rather than discover structure that is a true feature of the data. I will review our recent work in this space, which uses deep neural network architectures in the style of implicit generative models, and describe our current application of these methods to a number of active debates in the neuroscience community about the triviality of certain results.
John P. Cunningham is an associate professor in the Department of Statistics at Columbia University. He received a B.A. in computer science from Dartmouth College and an M.S. and Ph.D. in electrical engineering from Stanford University, and he completed postdoctoral work in the Machine Learning Group at the University of Cambridge. His research group at Columbia investigates several areas of machine learning and statistical neuroscience.