CSML Seminar Series

  • Exploration by Optimization in Partial Monitoring

    Tue, Nov 12, 2019, 4:30 pm

    In many real-world problems, learners cannot directly observe their own rewards but can still infer whether a particular action was successful. How should a learner choose actions to balance its need for information against its goal of maximizing reward? Partial monitoring is a framework introduced a few decades ago to model learning situations of this kind. The simplest version of partial monitoring generalizes learning with full-information, bandit, or even graph-based feedback.
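
    As a rough illustration of the feedback model (not taken from the talk), the sketch below plays a tiny finite partial-monitoring game in which the learner observes only a feedback symbol, never its loss; the loss and feedback matrices and the uniform policy are hypothetical placeholders.

    ```python
    import numpy as np

    # Toy partial-monitoring game (hypothetical matrices).
    # L[a, x]: learner's loss for action a against outcome x.
    # F[a, x]: feedback symbol observed -- it need not reveal the loss.
    L = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    F = np.array([[0, 0],    # action 0 is uninformative
                  [0, 1]])   # action 1 reveals the outcome

    rng = np.random.default_rng(0)
    total_loss, counts = 0.0, np.zeros((2, 2))
    for t in range(1000):
        a = rng.integers(2)          # placeholder policy: uniform exploration
        x = rng.integers(2)          # environment's outcome
        counts[a, F[a, x]] += 1      # only the feedback symbol is observed
        total_loss += L[a, x]        # incurred but never shown to the learner
    ```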

  • Optimizing for Fairness in ML

    Thu, Nov 7, 2019, 4:30 pm
    Recent events have made it evident that algorithms can be discriminatory, reinforce human prejudices, accelerate the spread of misinformation, and are generally not as objective as they are widely thought to be.
  • Word Embeddings: What works, what doesn’t, and how to tell the difference for applied research

    Fri, Oct 25, 2019, 12:00 pm
    We consider the properties and performance of word embedding techniques in the context of political science research. In particular, we explore key parameter choices, including context window length, embedding vector dimension, and the use of pre-trained vs. locally fit variants, with respect to efficiency and the quality of inferences these models support. Reassuringly, we show that results are generally robust to such choices for political corpora of various sizes and in various languages.
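
    For readers who want to experiment with the parameter choices the talk examines, here is a minimal sketch using gensim (4.x argument names); the two-sentence corpus is a hypothetical stand-in for a political-text corpus.

    ```python
    from gensim.models import Word2Vec

    # Hypothetical tokenized corpus standing in for political texts.
    corpus = [["the", "parliament", "voted", "on", "the", "budget"],
              ["the", "senate", "debated", "the", "budget", "bill"]]

    # The choices under study: context window length and embedding dimension,
    # plus locally fit vs. pre-trained vectors.
    model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, epochs=20)
    vec = model.wv["budget"]   # locally fit embedding for one token

    # Pre-trained alternative (downloads a public model):
    # import gensim.downloader as api
    # glove = api.load("glove-wiki-gigaword-100")
    ```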
  • Recent Advances in Non-Convex Distributed Optimization and Learning

    Mon, Nov 18, 2019, 4:30 pm
    We consider a class of distributed non-convex optimization problems in which a number of agents are connected by a communication network and collectively optimize a sum of (possibly non-convex and non-smooth) local objective functions. Problems of this type have recently gained popularity, especially in distributed training of deep neural networks.
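
    A minimal sketch of one standard method in this class, decentralized gradient descent on a ring of agents, may help fix ideas; the least-squares local objectives and the mixing matrix below are illustrative stand-ins, not the algorithms from the talk.

    ```python
    import numpy as np

    # Decentralized gradient descent (DGD) sketch on a ring of n agents.
    # Each agent i holds a local objective f_i; W is a doubly stochastic
    # mixing matrix encoding the communication graph.
    n, d, steps, lr = 5, 3, 200, 0.05
    rng = np.random.default_rng(1)
    A = [rng.standard_normal((4, d)) for _ in range(n)]   # hypothetical local data
    b = [rng.standard_normal(4) for _ in range(n)]

    def local_grad(i, x):
        # gradient of agent i's least-squares term (stand-in for a non-convex f_i)
        return A[i].T @ (A[i] @ x - b[i])

    W = np.zeros((n, n))
    for i in range(n):                     # ring topology: average with neighbors
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25

    X = rng.standard_normal((n, d))        # one iterate per agent
    for _ in range(steps):
        G = np.stack([local_grad(i, X[i]) for i in range(n)])
        X = W @ X - lr * G                 # mix with neighbors, then descend
    ```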
  • Can learning theory resist deep learning?

    Fri, Nov 15, 2019, 12:30 pm
    Machine learning algorithms are ubiquitous across scientific, industrial, and personal domains, with many successful applications. As a scientific field, machine learning has always been characterized by constant exchange between theory and practice, with a stream of algorithms that exhibit both good empirical performance on real-world problems and some form of theoretical guarantee.
  • Convergence Rates of Stochastic Algorithms in Nonsmooth Nonconvex Optimization

    Thu, Nov 14, 2019, 4:30 pm
    Stochastic iterative methods lie at the core of large-scale optimization and its modern applications to data science. Though such algorithms are routinely and successfully used in practice on highly irregular problems (e.g. deep neural networks), few performance guarantees are available outside of smooth or convex settings. In this talk, I will describe a framework for designing and analyzing stochastic methods on a large class of nonsmooth and nonconvex problems, with provable efficiency guarantees.
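
    To make the setting concrete, the sketch below runs a plain stochastic subgradient method on robust phase retrieval, a standard nonsmooth, nonconvex (weakly convex) test problem; the data and step-size schedule are hypothetical, and the method shown is the generic one rather than necessarily the talk's.

    ```python
    import numpy as np

    # Stochastic subgradient method on robust phase retrieval:
    #   f(x) = mean_i |(a_i^T x)^2 - b_i|, a nonsmooth nonconvex objective.
    rng = np.random.default_rng(2)
    d, m = 10, 500
    x_true = rng.standard_normal(d)
    A = rng.standard_normal((m, d))
    b = (A @ x_true) ** 2                          # hypothetical measurements

    x = rng.standard_normal(d)
    for t in range(2000):
        i = rng.integers(m)                        # sample one measurement
        r = (A[i] @ x) ** 2 - b[i]
        g = 2.0 * np.sign(r) * (A[i] @ x) * A[i]   # subgradient of |(a^T x)^2 - b|
        x -= (0.1 / np.sqrt(t + 1)) * g            # diminishing step size
    ```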
  • Provable Reinforcement Learning From Small Data

    Fri, Oct 4, 2019, 4:30 pm
    Recent years have witnessed increasing empirical successes in reinforcement learning (RL). However, many theoretical questions about RL remain poorly understood. For example, how many observations are necessary and sufficient for learning a good policy? How can one learn to control using structural information, with provable regret? In this talk, we discuss the statistical efficiency of RL, with and without structural information such as a linear feature representation, and show how to algorithmically learn the optimal policy with nearly minimax-optimal complexity.
  • Meisam Razaviyayn

    Thu, Oct 17, 2019, 4:30 pm
    Recent applications arising in machine learning have spurred significant interest in solving min-max saddle point games. This problem has been extensively studied in the convex-concave regime, for which a global equilibrium solution can be computed efficiently. In this talk, we study the problem in the non-convex regime and show that an $\epsilon$-first-order stationary point of the game can be computed when one player's objective can be optimized to global optimality efficiently. We discuss applications of the proposed algorithm to defense against adversarial attacks on neural networks, generative adversarial networks, fair learning, and generative adversarial imitation learning.
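
    As an illustration of the search for first-order stationary points (a hypothetical toy, not the paper's algorithm), the sketch below runs multi-step gradient descent-ascent on a small nonconvex-concave objective where the inner maximization is easy.

    ```python
    # min_x max_y f(x, y) with f(x, y) = x^2 * y - y^2 / 2:
    # nonconvex in x, strongly concave in y, so the inner max is tractable.
    def grad_x(x, y): return 2 * x * y
    def grad_y(x, y): return x ** 2 - y

    x, y = 1.0, 0.0
    eta_x, eta_y = 0.01, 0.1
    for t in range(500):
        for _ in range(10):            # (near-)maximize over y for fixed x ...
            y += eta_y * grad_y(x, y)
        x -= eta_x * grad_x(x, y)      # ... then take one descent step on x
    # an epsilon-first-order stationary point is one where the gradient of
    # g(x) = max_y f(x, y) has norm at most epsilon
    ```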
  • Prediction with Confidence – General Framework for Predictive Inference

    Fri, Sep 27, 2019, 12:00 pm
    We propose a general framework for prediction in which a prediction takes the form of a distribution function, called a 'predictive distribution function'. This predictive distribution function is well suited to prescribing the notion of confidence under the frequentist interpretation and provides meaningful answers to prediction-related questions. Its very form as a distribution function also makes it a useful tool for quantifying uncertainty in prediction.
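
    One familiar special case may make the idea concrete: for an i.i.d. Gaussian sample, the classical t-based predictive distribution is itself a distribution function for the next observation. The sketch below (with hypothetical data) computes it and extracts a prediction interval; the talk's framework is far more general than this single instance.

    ```python
    import numpy as np
    from scipy import stats

    # t-based predictive distribution for a future draw from a Gaussian sample.
    y = np.array([2.1, 1.9, 2.4, 2.0, 2.2])        # hypothetical observations
    n, mu, s = len(y), y.mean(), y.std(ddof=1)
    scale = s * np.sqrt(1 + 1 / n)

    def predictive_cdf(t):
        # Q(t) = P(Y_future <= t): a distribution function, not a point forecast
        return stats.t.cdf((t - mu) / scale, df=n - 1)

    # a (1 - alpha) prediction interval read off the predictive distribution:
    alpha = 0.1
    lo, hi = mu + scale * stats.t.ppf([alpha / 2, 1 - alpha / 2], df=n - 1)
    ```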
