Sharp tradeoffs for randomized numerical algorithms: Let the theory meet practice
Randomized numerical algorithms are fundamental to a variety of problems in signal processing and machine learning; examples include sparse signal processing and dimensionality reduction for faster machine learning. These algorithms come with tradeoffs involving the amount of data, computational resources, and statistical precision. Characterizing these tradeoffs is crucial for correct hyperparameter selection, time-sensitive optimization, and the eventual performance of the algorithms. In this talk, we describe our recent results on accurately predicting these tradeoffs in multiple scenarios, which helps further close the gap between theory and practice.
Samet Oymak is a software engineer at Google. Prior to that, he was a fellow at the Simons Institute and a postdoctoral scholar in the AMPLab at UC Berkeley. He received his BS from Bilkent University in 2009 and his MS and PhD from Caltech in 2014, all in electrical engineering. At Caltech, he was advised by Babak Hassibi and won the departmental best thesis award.