BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Date iCal//NONSGML kigkonsult.se iCalcreator 2.20.2//
METHOD:PUBLISH
X-WR-CALNAME;VALUE=TEXT:Center for Statistics and Machine Learning
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:STANDARD
DTSTART:20201101T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20211107T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20210314T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:calendar.3701.field_events_date.0@csml.princeton.edu
DTSTAMP:20210416T233207Z
CATEGORIES:CSML Seminar Series\, Featured Event
CREATED:20210204T202700Z
DESCRIPTION:Speaker: Yuejie Chi\, Carnegie Mellon University\n Title: Precon
ditioning Helps: Faster Convergence in Statistical and Reinforcement Learn
ing \n Day: Monday\, April 19\, 2021\n Time: 4:30 pm\n Zoom Link: Please regi
ster using this link\n Hosts: Yuxin Chen and Jason Lee\n\nAbstract:\n While
exciting progress has been made in understanding the global convergence of
vanilla gradient methods for solving challenging nonconvex problems in st
atistical estimation and machine learning\, their computational efficacy i
s still far from satisfactory for ill-posed or ill-conditioned problems. I
n this talk\, we discuss how the trick of preconditioning further boosts t
he convergence speed with minimal computation overheads through two exampl
es: low-rank matrix estimation in statistical learning and policy optimiza
tion in entropy-regularized reinforcement learning. For low-rank matrix es
timation\, we present a new algorithm\, called scaled gradient descent\, t
hat achieves linear convergence at a rate independent of the condition num
ber of the low-rank matrix at near-optimal sample complexities for a varie
ty of tasks\, even in the presence of adversarial outliers. For policy opt
imization\, we develop the first fast non-asymptotic convergence guarantee
for entropy-regularized natural policy gradient methods in the tabular se
tting for discounted Markov decision processes. By establishing its global
linear convergence at a near dimension-free rate\, we provide theoretical
footing for the empirical success of entropy-regularized natural policy g
radient methods.\n \n\nBio:\n Dr. Yuejie Chi is an Associate Professor in t
he Department of Electrical and Computer Engineering\, and a faculty affil
iate with the Machine Learning Department and CyLab at Carnegie Mellon Uni
versity\, where she holds the Robert E. Doherty Early Career Development P
rofessorship. She received her Ph.D. and M.A. from Princeton University\,
and B. Eng. (Hon.) from Tsinghua University\, all in Electrical Engineerin
g. Her research interests lie in the theoretical and algorithmic foundatio
ns of data science\, signal processing\, machine learning and inverse prob
lems\, with applications in sensing systems\, broadly defined. Among other
s\, Dr. Chi received the Presidential Early Career Award for Scientists an
d Engineers (PECASE)\, the inaugural IEEE Signal Processing Society Early
Career Technical Achievement Award for contributions to high-dimensional s
tructured signal processing\, and was named a Goldsmith Lecturer by the IEEE I
nformation Theory Society.\n\nThis seminar is supported by CSML and EE Kor
hammer Lecture Series Funds
DTSTART;TZID=America/New_York:20210419T163000
DTEND;TZID=America/New_York:20210419T163000
LAST-MODIFIED:20210204T203725Z
LOCATION:Virtual Seminar
SUMMARY:Preconditioning Helps: Faster Convergence in Statistical and Reinfo
rcement Learning
URL;TYPE=URI:https://csml.princeton.edu/events/preconditioning-helps-faster
-convergence-statistical-and-reinforcement-learning
END:VEVENT
BEGIN:VEVENT
UID:calendar.3766.field_events_date.2@csml.princeton.edu
DTSTAMP:20210416T233207Z
CREATED:20210309T201549Z
DESCRIPTION:The goal of the recommender systems (RS) reading group is to ga
in a deeper understanding of both seminal work and emerging ideas in
the field. Papers will include research on RS algorithm development and ev
aluation\; user-centered design and user studies for RS\; fairness\, accou
ntability\, and explainability in recommendations\; and societal impacts o
f RS.\n\nThe group will meet biweekly for one hour to discuss a selected p
aper. A rotating discussion leader will engage the group in whatever forma
t they feel is best for the paper (e.g.\, presentation\, guided discussion
\, free-form discussion).\n\nUpcoming papers include:\n\n\n Zhao\, Q.\, Har
per\, F. M.\, Adomavicius\, G.\, & Konstan\, J. A. (2018\, April). Explici
t or implicit feedback? Engagement or satisfaction? A field experiment on
machine-learning-based recommender systems. In Proceedings of the 33rd Ann
ual ACM Symposium on Applied Computing (pp. 1331-1340).\n Hu\, Y.\, Koren\,
Y.\, & Volinsky\, C. (2008\, December). Collaborative filtering for impli
cit feedback datasets. In 2008 Eighth IEEE International Conference on Dat
a Mining (pp. 263-272). IEEE.\n Dacrema\, M. F.\, Cremonesi\, P.\, & Jannac
h\, D. (2019\, September). Are we really making much progress? A worrying
analysis of recent neural recommendation approaches. In Proceedings of the
13th ACM Conference on Recommender Systems (pp. 101-109).\n Knijnenburg\,
B. P.\, Bostandjiev\, S.\, O’Donovan\, J.\, & Kobsa\, A. (2012\, September
). Inspectability and control in social recommenders. In Proceedings of th
e Sixth ACM Conference on Recommender Systems (pp. 43-50).\n Jannach\, D.\,
& Adomavicius\, G. (2016\, September). Recommendations with a purpose. In
Proceedings of the 10th ACM Conference on Recommender Systems (pp. 7-10).
\n Cremonesi\, P.\, Koren\, Y.\, & Turrin\, R. (2010\, September). Performa
nce of recommender algorithms on top-n recommendation tasks. In Proceeding
s of the Fourth ACM Conference on Recommender Systems (pp. 39-46).\n\n\nFo
r additional information please contact Amy Winecoff at aw0934@princeton.e
du.
DTSTART;TZID=America/New_York:20210427T160000
DTEND;TZID=America/New_York:20210427T160000
LAST-MODIFIED:20210309T201549Z
RRULE:FREQ=WEEKLY;COUNT=3;INTERVAL=2;WKST=MO
SUMMARY:CITP Reading Group: Recommender Systems (RS)
URL;TYPE=URI:https://csml.princeton.edu/events/citp-reading-group-recommend
er-systems-rs
END:VEVENT
END:VCALENDAR