Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative to standard neural networks since they can represent similar function spaces without a costly training phase. However, to be accurate, random feature methods require more measurements than trainable parameters, limiting their use in data-scarce applications and in scientific machine learning. This paper introduces a sparse random feature method that learns parsimonious random feature models using techniques from compressive sensing. We provide uniform bounds on the approximation error for functions in a reproducing kernel Hilbert space, depending on the number of samples and the distribution of features. The error bounds improve under additional structural conditions, such as coordinate sparsity, compact clusters of the spectrum, or rapid spectral decay. We show that the sparse random feature method outperforms shallow networks on well-structured functions and on scientific machine learning tasks.
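A minimal numerical sketch of the idea described above: draw more random features than data samples, then recover a sparse coefficient vector with an l1-regularized solver, as in compressive sensing. The target function, feature distribution, and use of ISTA (proximal gradient for the LASSO) here are illustrative assumptions, not the specific algorithm or bounds of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D target and a data-scarce sample (n samples, N > n features).
f = lambda x: np.sin(2 * x)
n, N = 40, 200
x = rng.uniform(-np.pi, np.pi, n)
y = f(x)

# Random Fourier features: phi_k(x) = cos(w_k * x + b_k), with w_k ~ N(0, 1).
w = rng.normal(0.0, 1.0, N)
b = rng.uniform(0.0, 2 * np.pi, N)
A = np.cos(np.outer(x, w) + b) / np.sqrt(N)

# Sparse coefficients via ISTA: gradient step on the least-squares loss,
# then soft-thresholding, standing in for a compressive-sensing solver.
lam = 1e-3
step = 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(N)
for _ in range(5000):
    c -= step * (A.T @ (A @ c - y))
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)

# The learned model uses only the features with nonzero coefficients.
print(f"nonzero features: {np.count_nonzero(c)} / {N}")
print(f"training residual: {np.linalg.norm(A @ c - y):.4f}")
```

The soft-thresholding step is what produces parsimony: most coefficients are driven exactly to zero, so the fitted model uses far fewer features than were drawn.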
Rachel Ward: Research interests: applied harmonic analysis, probability, optimization, mathematical data science.
Here is a short bio, and here is a CV (updated June 2020).
My papers can be found on my Google Scholar profile.
I am thankful for my students and postdocs. If you are a UT student interested in working with me, please first read through my recent papers to make sure the topics and level of theory are a good fit; if so, come with a list of questions to ask me. We currently have an open position for a postdoc to work on randomized methods for solving linear systems.
I am fortunate to serve on the editorial boards of Information and Inference and the SIAM Journal on Mathematics of Data Science, and to be a chair of the inaugural Mathematical and Scientific Machine Learning (MSML) conference.