Eilyan Bitar, Cornell University

 

Title: Transitioning to a Modern Power System: The Role of Control, Optimization, and Human Behavior

 

Abstract: Renewable energy resources are at the heart of the drive to modernize the electric power system. A fundamental obstacle is that the supply of power from renewables is inherently random. It is largely uncontrollable, highly intermittent, and difficult to forecast. Accommodating this variability at scale will require a paradigm shift in how we evolve and manage the power system. In this short lecture, we will offer a system-theoretic perspective on several critical challenges facing the deep integration of renewables. In particular, we will discuss the emergence of several interesting problems -- at the intersection of control, optimization, and game theory -- whose solutions are essential to realizing this transition to a modern power system.

 

Biography: Eilyan Bitar is currently an Assistant Professor in the School of Electrical and Computer Engineering at Cornell University. Prior to joining Cornell in the fall of 2012, he was a Postdoctoral Fellow in the Department of Computing + Mathematical Sciences (CMS) at the California Institute of Technology and in Electrical Engineering and Computer Sciences at the University of California, Berkeley, during the 2011-12 academic year. A native Californian, he received both his Ph.D. (2011) and B.S. (2006) from the University of California, Berkeley. Professor Bitar's research interests include modern power systems and electricity markets, stochastic control, and optimization.

Tara Javidi, University of California, San Diego

 

Title: Social Learning and Hypothesis Testing

 

Abstract: Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypothesis). The conditional distributions are known locally at the nodes, but the true parameter/hypothesis is not known. An update rule is analyzed in which nodes first perform a Bayesian update of their belief (distribution estimate) of the parameter based on their local observation, communicate these updates to their neighbors, and then perform a non-Bayesian linear consensus using the log-beliefs of their neighbors. Under mild assumptions, we show that the belief of any node in any incorrect parameter converges to zero exponentially fast, and that the exponential rate of learning is characterized by the network structure and the divergences between the observations' distributions. Furthermore, we provide a concentration result for the rate of learning: we characterize the probability of being within an error margin of the asymptotic rate as a function of time, the periodicity of the network, the error margin, and the observation model.
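
As a rough illustration of this type of update rule (the notation here is illustrative and may differ from that used in the talk), let b_{i,t}(\theta) denote node i's belief in hypothesis \theta at time t, \ell_i(\cdot \mid \theta) its local likelihood, and W a stochastic weight matrix supported on the network edges. A local Bayesian update followed by a linear consensus on log-beliefs reads

    q_{i,t}(\theta) = \frac{\ell_i(X_{i,t} \mid \theta)\, b_{i,t-1}(\theta)}{\sum_{\theta'} \ell_i(X_{i,t} \mid \theta')\, b_{i,t-1}(\theta')}, \qquad b_{i,t}(\theta) \propto \exp\Big( \sum_{j} W_{ij} \log q_{j,t}(\theta) \Big),

so that each node fuses its own observation with its neighbors' updated log-beliefs.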

 

Biography: Tara Javidi studied electrical engineering at Sharif University of Technology, Tehran, Iran, from 1992 to 1996. She received her MS degrees in electrical engineering (systems) and in applied mathematics (stochastics) from the University of Michigan, Ann Arbor, in 1998 and 1999, respectively. She received a Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, in 2002. From 2002 to 2004, she was an assistant professor in the Electrical Engineering Department, University of Washington, Seattle. In 2005, she joined the University of California, San Diego, where she is currently an associate professor of electrical and computer engineering. During 2013-2014, she was a visiting faculty member at Stanford University, where she spent her sabbatical. Tara Javidi was a Barbour Scholar during the 1999-2000 academic year and received an NSF CAREER Award in 2004. Her research interests are in communication networks, stochastic resource allocation, stochastic control theory, and wireless communications.

Mihailo R. Jovanovic, University of Minnesota

 

Title: Dynamics and Control of Distributed Systems: Lessons, Opportunities, and Challenges

 

Abstract: Networks of dynamical systems often display complex dynamical responses that cannot be predicted by analyzing subsystems in isolation. Understanding these responses and quantifying fundamental performance limitations in the presence of structural constraints on distributed controllers are important challenges for both analysis and design. I will briefly summarize fundamental limitations arising from the use of local feedback in networks subject to stochastic disturbances. I will then demonstrate how tools and ideas from control theory, optimization, and compressive sensing can be combined to identify network topologies that strike a desired tradeoff between network performance and controller sparsity. Finally, I will highlight how working across different disciplines inspires theoretical developments that bring valuable insight into emerging applications and provide new methods for uncertainty quantification, analysis, and design of distributed systems.
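
One concrete formulation in this spirit, stated here only as a sketch in our own notation, is the sparsity-promoting optimal control problem

    \min_{F} \; J(F) + \gamma \sum_{i,j} |F_{ij}|,

where J(F) is the closed-loop performance (e.g., the H_2 norm) of the feedback law u = -Fx and the weight \gamma \ge 0 trades performance against the sparsity of the feedback gain, i.e., against the communication topology the controller requires.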

 

Biography: Mihailo R. Jovanovic was born in Arandjelovac, Serbia. He received the Dipl. Ing. and M.S. degrees from the University of Belgrade, Serbia, in 1995 and 1998, respectively, and the Ph.D. degree from the University of California, Santa Barbara, in 2004, under the direction of Bassam Bamieh. Before joining the University of Minnesota, Minneapolis, he was a Visiting Researcher with the Department of Mechanics, the Royal Institute of Technology, Stockholm, Sweden, from September to December 2004. Currently, he is an Associate Professor of Electrical and Computer Engineering at the University of Minnesota, Minneapolis, where he also serves as the Director of Graduate Studies in the interdisciplinary Ph.D. program in Control Science and Dynamical Systems. He has held visiting positions with Stanford University and the Institute for Mathematics and its Applications. Professor Jovanovic's expertise is in modeling, dynamics, and control of large-scale and distributed systems, and his current research focuses on sparsity-promoting optimal control, dynamics and control of fluid flows, and fundamental limitations in the design of large dynamic networks. He is a senior member of IEEE, and a member of APS and SIAM. He currently serves as an Associate Editor of the SIAM Journal on Control and Optimization and served as an Associate Editor on the IEEE Control Systems Society Conference Editorial Board from July 2006 until December 2010. He received a CAREER Award from the National Science Foundation in 2007, an Early Career Award from the University of Minnesota Initiative for Renewable Energy and the Environment in 2010, a Resident Fellowship within the Institute on the Environment at the University of Minnesota in 2012, the George S. Axelby Outstanding Paper Award from the IEEE Control Systems Society in 2013, the University of Minnesota Informatics Institute Transdisciplinary Research Fellowship in 2014, and the Distinguished Alumni Award from the Department of Mechanical Engineering at UC Santa Barbara in 2014. His students' papers were finalists for the Best Student Paper Award at the American Control Conference in 2007 and 2014.

Javad Lavaei, Columbia University

 

Title: Graph-Theoretic Algorithm for Arbitrary Polynomial Optimization Problems with Applications to Distributed Control, Power Systems, and Matrix Completion

 

Abstract: Optimization theory plays a crucial role in the design, analysis, control, and operation of real-world systems. The development of efficient optimization techniques and numerical algorithms for nonlinear optimization problems has been an active area of research for many years. The goal is to design an efficient, robust, and scalable method for finding a global or near-global solution. This remains an open problem for the general class of polynomial optimization, which includes combinatorial optimization. In this talk, we study a general polynomial optimization problem using a semidefinite programming (SDP) relaxation. The existence of a rank-1 matrix solution to the SDP relaxation enables the recovery of a global solution of the original problem. We propose a graph-theoretic technique to sparsify the optimization problem of interest so that its SDP relaxation has a guaranteed low-rank solution. As a by-product, we show that every polynomial optimization problem admits a sparse quadratic representation whose SDP relaxation has a matrix solution with rank at most 3. This result provides a basis for finding a near-global solution by approximating the rank-3 SDP solution with a rank-1 matrix. In this talk, we apply our technique to three long-standing problems: optimal distributed control, power optimization, and low-rank matrix completion.
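
To make the rank-1 recovery idea concrete, here is a minimal sketch of a standard Shor-type SDP relaxation of a toy quadratic program (this is illustrative only and is not the graph-theoretic sparsification technique of the talk; the instance, tolerances, and use of the CVXPY package are our own assumptions):

    import numpy as np
    import cvxpy as cp

    # Hypothetical toy instance: minimize x^T C x subject to x^T x <= 1.
    n = 4
    rng = np.random.default_rng(0)
    C = rng.standard_normal((n, n))
    C = (C + C.T) / 2  # symmetrize the cost matrix

    # Shor-type SDP relaxation: replace the rank-1 matrix x x^T by a PSD matrix X.
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) <= 1]
    prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
    prob.solve()

    # If the relaxation returns a (numerically) rank-1 solution, a global minimizer
    # of the original problem is recovered from the leading eigenvector of X.
    eigvals, eigvecs = np.linalg.eigh(X.value)
    if eigvals[-1] > 1e6 * max(eigvals[-2], 1e-12):
        x_star = np.sqrt(eigvals[-1]) * eigvecs[:, -1]
        print("rank-1 solution recovered:", x_star)
    else:
        print("relaxation solution has rank > 1; it only provides a lower bound here")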

 

Biography: Javad Lavaei joined Columbia University’s Electrical Engineering Department in 2012. He received the B.Sc. degree in electronics engineering from Sharif University of Technology in 2003, the M.A.Sc. degree in electrical engineering from Concordia University in 2007, and the Ph.D. degree in Control & Dynamical Systems from California Institute of Technology in 2011. Prior to joining Columbia, he spent one year as a Postdoctoral Scholar in Electrical Engineering and Precourt Institute for Energy at Stanford University. He is interested in control theory, power systems, optimization theory, and networking. Javad Lavaei is the recipient of the Milton and Francis Clauser Doctoral Prize for the best campus-wide Ph.D. thesis, entitled “Large-Scale Complex Systems: From Antenna Circuits to Power Grids”. He is a senior member of IEEE and has won several awards, including the Canadian Governor General’s Gold Medal, Northeastern Association of Graduate Schools Master’s Thesis Award, New Face of Engineering, and Silver Medal in the International Mathematical Olympiad.

Aditya Mahajan, McGill University

 

Title: Team Optimal Decentralized Control of Mean-field Coupled Subsystems

 

Abstract: Motivated by the application of demand response for integrating renewable generation in traditional power systems, we investigate team optimal decentralized control of subsystems whose dynamics are weakly coupled through their mean field. To ensure fairness and robustness, all subsystems must use identical control strategies. A consequence of this restriction is that permuting subsystems does not affect system performance. We exploit this symmetry to identify a dynamic programming decomposition that determines globally optimal decentralized control laws. In general, the solution complexity of team optimal decentralized stochastic control grows doubly exponentially with the number of controllers. In contrast, the solution complexity of the proposed approach for the above model grows only polynomially with the number of controllers. We illustrate the approach with a demand response example involving 1000 coupled subsystems.
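
As a rough sketch of this model class (in illustrative notation that may differ from the talk's), subsystem i has state x_t^i and control u_t^i evolving as

    x_{t+1}^i = f(x_t^i, u_t^i, m_t, w_t^i), \qquad m_t = \frac{1}{n} \sum_{j=1}^{n} \delta_{x_t^j},

where m_t is the empirical distribution (mean field) of the states and w_t^i is local noise; the fairness requirement means every subsystem applies the same strategy, e.g., u_t^i = g_t(x_t^i, m_t). Under such symmetry the mean field can serve as a common information state, which is one way a tractable dynamic programming decomposition can arise.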

 

Biography: Aditya Mahajan is an Assistant Professor of Electrical and Computer Engineering at McGill University, Montreal, Canada. He is a member of the McGill Centre for Intelligent Machines (CIM) and the Groupe d'études et de recherche en analyse des décisions (GERAD). He is a senior member of the IEEE and a member of SIAM. He received the B.Tech degree in Electrical Engineering from the Indian Institute of Technology, Kanpur, India, in 2003, and the MS and PhD degrees in Electrical Engineering and Computer Science from the University of Michigan, Ann Arbor, USA, in 2006 and 2008, respectively. From 2008 to 2010, he was a postdoctoral researcher in the Department of Electrical Engineering at Yale University, New Haven, CT, USA. His principal research interests include decentralized stochastic control, team theory, multi-armed bandits, real-time communication, information theory, and discrete event systems.

Jason Marden, University of Colorado at Boulder

 

Title: Selecting Efficient Correlated Equilibria Through Distributed Learning

 

Abstract: The vast majority of distributed learning algorithms focus on convergence to Nash equilibria. Correlated equilibria, on the other hand, can often characterize more efficient collective behavior than even the best Nash equilibrium. However, there are no existing distributed learning algorithms that converge to specific correlated equilibria. In this talk, we provide one such algorithm, which guarantees that the agents’ collective joint strategy will constitute an efficient correlated equilibrium with high probability in any normal-form game with generic payoffs. The key to attaining efficient correlated behavior through distributed learning is the incorporation of a common random signal into the learning environment.
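
For reference, one standard definition (in notation that is not necessarily the talk's): a distribution q over joint actions a = (a_i, a_{-i}) is a correlated equilibrium if, for every agent i and every pair of actions a_i, a_i',

    \sum_{a_{-i}} q(a_i, a_{-i}) \big[ U_i(a_i', a_{-i}) - U_i(a_i, a_{-i}) \big] \le 0,

that is, no agent can gain in expectation by deviating from the action recommended by the correlating signal.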

 

Biography: Jason Marden is an Assistant Professor in the Department of Electrical, Computer, and Energy Engineering at the University of Colorado. He received a BS in Mechanical Engineering in 2001 from UCLA and a PhD in Mechanical Engineering in 2007, also from UCLA, under the supervision of Jeff S. Shamma, and was recognized as the Outstanding Graduating PhD Student in Mechanical Engineering. After graduating from UCLA, he served as a junior fellow in the Social and Information Sciences Laboratory at the California Institute of Technology until 2010, when he joined the University of Colorado. In 2012, he received the Donald P. Eckman Award and an AFOSR Young Investigator Award. His research interests focus on game-theoretic methods for the feedback control of distributed multiagent systems.

Michael Rotkowitz, University of Maryland

 

Title: Stabilization of Decentralized Systems with Arbitrary Information Structure

 

Abstract: We revisit the fundamental question of whether a linear time-invariant (LTI) system can be stabilized by an LTI decentralized controller. A seminal result in decentralized control is the development of fixed modes by Wang and Davison in 1973: plant modes which cannot be moved with a static decentralized controller cannot be moved by a dynamic one either, and the modes which can be moved can be shifted to any chosen locations with arbitrary precision. These results were developed for the perfectly decentralized, or block-diagonal, information structure, in which each control input may only depend on a single corresponding measurement. Furthermore, the original results were claimed after only a preliminary step was demonstrated, omitting a rigorous induction for each result, and the remaining task is nontrivial. We discuss recent work which considers fixed modes for arbitrary information structures, where certain control inputs may depend on some measurements but not others. This work comprehensively demonstrates that modes which cannot be altered by a static controller with the given structure cannot be moved by a dynamic one either, and that modes which can be altered by a static controller with the given structure can be moved by a dynamic one to any chosen locations with arbitrary precision, thus generalizing and solidifying the seminal results. Together, these results show that a system can be stabilized by an LTI controller with the given structure if and only if all of its modes which are fixed with respect to that structure lie in the open left half-plane. An algorithm for synthesizing such a stabilizing controller whenever one exists is then distilled from the proof, and we briefly discuss some of its advantages and disadvantages. We will further discuss how, when a stronger stability criterion is desired, whereby the size of the state must always be decreasing, stabilizability, as well as a stabilizing controller (when one exists), can be determined from a linear matrix inequality (LMI). Time permitting, we may briefly discuss issues which arise when time-varying or nonlinear controllers are also considered, and/or how this work can be used in conjunction with other recent work to find optimal (stabilizing) decentralized controllers. This is joint work with Alborz Alavian.
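
For reference, the fixed modes discussed above can be stated compactly (in our notation): given a plant (A, B, C) and an information structure described by the set \mathcal{K} of static output feedback gains with the admissible sparsity pattern, the fixed modes with respect to that structure are

    \Lambda_{\mathrm{fixed}} = \bigcap_{K \in \mathcal{K}} \mathrm{eig}(A + B K C),

the eigenvalues that no admissible static gain can move. The result described above then says that an LTI controller with the given structure can stabilize the plant if and only if every element of \Lambda_{\mathrm{fixed}} lies in the open left half-plane.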

 

Biography: Professor Michael Rotkowitz received the B.S. degree in Mathematical and Computational Science (with Honors and with Distinction) from Stanford University, Stanford, CA, in 1996. He then worked for J.P. Morgan Investment Management, New York, until 1998. He returned to Stanford and received the Ph.D. degree in Aeronautics and Astronautics in 2005. During that time, he also received the M.S. degree in Aeronautics and Astronautics and the M.S. degree in Statistics, and worked for NASA Ames Research Center. Dr. Rotkowitz was the Postdoctoral Fellow in Networked Embedded Control in the School of Electrical Engineering at the Royal Institute of Technology (KTH), Stockholm, Sweden, from 2005 to 2006, and a Research Fellow in the Department of Information Engineering at the Australian National University in Canberra, Australia, from 2006 to 2008. He then joined the University of Melbourne, where he held the positions of Queen Elizabeth II Fellow and Future Generation Fellow in the Department of Electrical and Electronic Engineering, as well as Honorary Fellow in the Department of Mathematics and Statistics.

Henrik Sandberg, KTH Royal Institute of Technology

 

Title: Implementation Costs and Information Flow in Kalman-Bucy Filters

 

Abstract: In this talk, we discuss fundamental limits for physical implementations of the Kalman-Bucy filter for linear passive systems. In particular, we show that the Kalman-Bucy filter itself is a passive system and, by invoking the second law of thermodynamics, we can characterize the external power supply needed to generate the optimal state estimate. We also show how the required external power supply can be decreased by allowing the filter to perturb the measured system to a larger extent. Hence, it is possible to decrease the so-called back action of the filter by spending more energy. By computing the information flow into the filter, we can also relate our result to Landauer's principle, which puts a theoretical lower limit on the energy consumption of memory erasure.
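
For reference, the standard continuous-time filter equations (the passivity and thermodynamic analysis in the talk rests on structure beyond what is shown here): for a linear system dx = A x dt + dw, dy = C x dt + dv with noise covariances Q and R, the Kalman-Bucy estimate \hat{x} evolves as

    d\hat{x} = A \hat{x}\, dt + K (dy - C \hat{x}\, dt), \qquad K = P C^{\top} R^{-1}, \qquad \dot{P} = A P + P A^{\top} + Q - P C^{\top} R^{-1} C P,

where P is the estimation error covariance.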

 

Biography: Henrik Sandberg received the M.Sc. degree in engineering physics and the Ph.D. degree in automatic control from Lund University, Lund, Sweden, in 1999 and 2004, respectively. He is an Associate Professor with the Department of Automatic Control, KTH Royal Institute of Technology, Stockholm, Sweden. From 2005 to 2007, he was a Post-Doctoral Scholar with the California Institute of Technology, Pasadena, USA. In 2013, he was a visiting scholar at the Laboratory for Information and Decision Systems (LIDS) at MIT, Cambridge, USA. He has also held visiting appointments with the Australian National University and the University of Melbourne, Australia. His current research interests include secure networked control, power systems, model reduction, and fundamental limitations in control. Dr. Sandberg was a recipient of the Best Student Paper Award from the IEEE Conference on Decision and Control in 2004 and an Ingvar Carlsson Award from the Swedish Foundation for Strategic Research in 2007. He is currently an Associate Editor of the IFAC Journal Automatica.

Danielle Tarraf, Johns Hopkins University

 

Title: Less is More: A New Paradigm for Control

 

Abstract: Society’s widespread reliance on embedded autonomy brings forth a new set of challenges for the control engineer. In this talk, I advocate for a new paradigm built around the use of finite-alphabet sensing and actuation and finite-memory control to meet these challenges. I survey our efforts in developing the theoretical foundations of this paradigm, highlighting the central role of information.

 

Biography: Danielle C. Tarraf is an Assistant Professor of Electrical and Computer Engineering at the Johns Hopkins University. She previously held postdoctoral positions in the Division of Control and Dynamical Systems at the California Institute of Technology (2007-2008) and in the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology (2006-2007). She received her B.E. degree from the American University of Beirut in 1996, and her M.S. and Ph.D. degrees from the Massachusetts Institute of Technology in 1998 and 2006, respectively. Dr. Tarraf is the recipient of a 2012 Johns Hopkins University Alumni Excellence in Teaching Award, a 2011 AFOSR Young Investigator Award and a 2010 NSF CAREER Award.

Serdar Yüksel, Queen’s University

 

Title: Asymptotic Optimality of Quantized Policies and Finite Approximations in Stochastic Control

 

Abstract: For Markov decision processes with uncountable state and action spaces, the computation of optimal policies is known to be prohibitively hard. In addition, networked control applications require remote controllers to transmit action commands to an actuator at a low information rate. These two problems motivate the study of approximating optimal policies by quantized (discretized) policies. We consider the finite-action approximation of stationary policies for a discrete-time Markov decision process with discounted and average costs under strong or weak continuity assumptions on the transition kernel. Discretized policies are shown to approximate optimal deterministic stationary policies with arbitrary precision. These results are also applicable to a fully observed reduction of a partially observed Markov decision process. Under stronger conditions on the transition kernels, rates of convergence are obtained. With further assumptions, we also obtain finite state and action approximations of discrete-time Markov decision processes with discounted and average costs. Stationary policies obtained from finite-state approximations of the original model are shown to approximate the optimal stationary policy with arbitrary precision under mild technical conditions. (Joint work with Naci Saldi and Tamás Linder.)
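
One way to express the kind of approximation statement described above (in illustrative notation, not necessarily the talk's): with discounted cost J_\beta(x, \pi) = E_x^\pi \big[ \sum_{t=0}^{\infty} \beta^t c(x_t, u_t) \big] and an increasing sequence of finite action sets \Lambda_1 \subset \Lambda_2 \subset \cdots whose union is dense in the action space,

    \lim_{m \to \infty} \inf_{\pi \in \Pi(\Lambda_m)} J_\beta(x, \pi) = \inf_{\pi \in \Pi} J_\beta(x, \pi),

where \Pi(\Lambda_m) denotes the stationary policies taking values in \Lambda_m; that is, restricting attention to finitely many actions incurs an arbitrarily small loss as the quantization is refined.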

 

Biography: Serdar Yüksel received his B.Sc. degree in Electrical and Electronics Engineering from Bilkent University in 2001, and his M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2003 and 2006, respectively. He was a post-doctoral researcher at Yale University for a year before joining Queen's University as an Assistant Professor of Mathematics and Engineering in the Department of Mathematics and Statistics, where he is now an Associate Professor. He was awarded the 2013 CAIMS/PIMS Early Career Award in Applied Mathematics. His research interests are in stochastic and decentralized control, information theory, and applied probability. He is an Associate Editor of the IEEE Transactions on Automatic Control.