Adaptive, Parallel, and Asynchronous Stochastic Optimization Algorithms (2016/09/01)
Abstract: In this talk, I will discuss some recent insights in stochastic optimization algorithms, focusing on new adaptive schemes that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based optimization. These ideas allow us to develop learning algorithms that are (in a sense) optimal for the data they actually receive. As a particular example of these schemes, we look at problems where the *data* is sparse, which is in a sense dual to the current understanding of high-dimensional statistical learning and optimization. We also show how these ideas can be leveraged in the design of parallel and asynchronous algorithms, providing experimental evidence to complement our theoretical results on several different learning and optimization tasks.
Biography: I am currently a PhD candidate in computer science at Berkeley, where I started in the fall of 2008. I work in the Statistical Artificial Intelligence Lab (SAIL) under the joint supervision of Mike Jordan and Martin Wainwright. I obtained my master's degree (MA) in statistics in the fall of 2012. I was initially supported by an NDSEG fellowship, and until recently was supported by Facebook, which generously awarded me a Facebook Fellowship. Before this, I was an undergraduate and a master's student at Stanford University, working with Daphne Koller in her research group, DAGS. I have also spent some time at Google Research (once upon a time I was also a software engineer there), where I had (and continue to have) the great fortune to work with Yoram Singer.