

Nowadays, large-scale systems are ubiquitous. Examples include wireless communication networks; electricity-grid, sensor, and cloud networks; and machine learning and signal processing applications. The analysis and design of distributed parallel algorithms for such systems is a challenging task. In this talk we will present our ongoing work in this area.

In the first part of the talk we will introduce a novel and flexible decomposition framework for the distributed optimization of general (stochastic) nonconvex sum-utility functions, subject to private and/or coupling (nonconvex) constraints. The scheme enjoys many desirable features: i) it includes both fully parallel (i.e., Jacobi) and sequential (i.e., Gauss-Seidel) updates, as well as virtually all possibilities “in between”, all converging under the same conditions; ii) it unifies and generalizes several existing successive convex approximation-based algorithms and their convergence properties, including (proximal) gradient and Newton-type methods, (parallel) block coordinate descent schemes, and difference-of-convex-functions methods; and iii) it can easily be particularized to well-known applications, giving rise to very efficient practical distributed algorithms that outperform existing ad hoc methods proposed for very specific problems.
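To make the Jacobi-versus-Gauss-Seidel distinction concrete, here is a minimal Python sketch of the two block-update patterns on a simple least-squares objective. This is an illustrative toy, not the speaker's actual framework: the function names, the quadratic objective, and the single-gradient-step surrogate per block are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def block_update(x, i, A, b, step):
    # Convex surrogate for block (coordinate) i: one gradient step on
    # f(x) = 0.5 * ||A x - b||^2, holding all other blocks fixed.
    grad_i = A[:, i] @ (A @ x - b)
    return x[i] - step * grad_i

def sca_sketch(A, b, mode="jacobi", iters=3000, step=None):
    """Toy block successive-approximation loop.

    mode="jacobi":       all blocks updated in parallel from the same iterate.
    mode="gauss-seidel": blocks updated sequentially, each seeing the latest values.
    """
    n = A.shape[1]
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative, convergent step size
    x = np.zeros(n)
    for _ in range(iters):
        if mode == "jacobi":
            # Every block reads the same x; updates applied simultaneously.
            x = np.array([block_update(x, i, A, b, step) for i in range(n)])
        else:
            # Each block immediately sees the updates of the previous blocks.
            for i in range(n):
                x[i] = block_update(x, i, A, b, step)
    return x
```

On this convex toy problem both schedules converge to the least-squares solution; the point of the framework described above is that such "in between" schedules remain convergent under one common set of conditions even in the nonconvex setting.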

The second part of the talk is devoted to the application of the proposed framework to a variety of problems in different areas, including (time permitting): i) the design of wireless multi-user interfering systems; ii) demand-side management in smart grids; iii) distributed computational offloading in mobile cloud computing; iv) optimization problems in machine learning (e.g., LASSO and logistic regression); and v) MRI reconstruction.
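Of the applications listed above, LASSO is perhaps the simplest concrete instance: the standard proximal-gradient (ISTA) iteration can be read as a successive convex approximation in which the smooth term is linearized and the l1 term is kept. The sketch below is a generic textbook ISTA, not the speaker's algorithm; the function names and the test problem are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (applied elementwise).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, b, lam, iters=5000):
    """Proximal-gradient (ISTA) sketch for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    Each iteration minimizes a convex surrogate: the smooth part is
    replaced by its linearization plus a quadratic proximal term."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), lam * step)
    return x
```

With lam = 0 the iteration reduces to plain gradient descent on the least-squares objective, and for lam above ||A^T b||_inf the zero vector is optimal and the iterates stay at zero, which gives two easy sanity checks.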


Gesualdo Scutari (S’05–M’06–SM’11) received the Electrical Engineering and Ph.D. degrees (both with honors) from the University of Rome “La Sapienza”, Rome, Italy, in 2001 and 2005, respectively. He is an Assistant Professor with the Department of Electrical Engineering at the State University of New York (SUNY) at Buffalo, Buffalo, NY. He previously held several research appointments at the University of California at Berkeley, Berkeley, CA; the Hong Kong University of Science and Technology, Hong Kong; the University of Rome “La Sapienza”, Rome, Italy; and the University of Illinois at Urbana-Champaign, Urbana, IL. His primary research interests focus on theoretical and algorithmic issues in continuous (large-scale) optimization and equilibrium programming, and their applications to signal processing, communications and networking, medical imaging, machine learning, smart grids, and distributed decision making. Dr. Scutari is an Associate Editor of the IEEE Transactions on Signal Processing, and he served as an Associate Editor of IEEE Signal Processing Letters. He serves on the IEEE Signal Processing Society Technical Committee on Signal Processing for Communications (SPCOM). Dr. Scutari received the 2006 Best Student Paper Award at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2006, the 2013 NSF Faculty Early Career Development (CAREER) Award, and the 2013 UB Young Investigator Award.

parallel_and_distributed_optimization_of_large_scale_systems.txt · Last modified: 2016/09/01 19:15 (external edit)