
“Learning, Incentives and Optimization for Human-Energy System Interaction”

Prof. Insoon Yang, USC

Wednesday, September 21, 2016, 2:00-3:00 PM, EEB 248

Abstract: With advances in Cyber-Physical Systems (CPS) and Internet of Things (IoT) technologies, sensor and communication networks and computing elements are pervasive in many modern infrastructures that affect our daily lives. However, sustainable interaction between human users and CPS or IoT is not guaranteed unless an appropriate coordination mechanism exists. Specifically, on one hand, we can customize the operation of these systems by learning user behaviors and preferences; on the other hand, we can incentivize human users to cooperate with the system operation. Such feedback loops between human users and CPS can improve large-scale critical infrastructure systems when combined with suitable optimization techniques. In this talk, I will present learning, incentive, and optimization tools that support interactions between human users and modern energy systems, an important class of CPS- and IoT-enabled infrastructure. The first tool, called utility learning model predictive control, learns quasi-periodic user behaviors and preferences using Gaussian processes to optimize the operation of personal electric loads such as HVAC systems and electric vehicles. Second, I will discuss contracts that incentivize customers to provide useful services to the power grid with the aid of automated demand response technology that automatically controls the customers' loads. Finally, we will consider resource allocation problems in power networks associated with these CPS- and IoT-based technologies, as well as customer targeting to maximize social welfare, and identify the submodularity structure that justifies the use of greedy algorithms providing (1-1/e)-optimal solutions.
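The (1-1/e) guarantee mentioned at the end of the abstract refers to the classical result that greedy selection is near-optimal for maximizing monotone submodular functions. The following is a minimal illustrative sketch of that idea, not the talk's actual model: the customer-targeting objective and all names below are hypothetical stand-ins.

```python
# Sketch (assumed example, not the speaker's formulation): greedy selection
# for a monotone submodular objective, e.g. choosing k customers to target
# so that the number of distinct grid services covered is maximized.

def greedy_maximize(ground_set, objective, k):
    """Greedily pick k elements; for a monotone submodular objective this
    achieves at least a (1 - 1/e) fraction of the optimal value."""
    selected = set()
    for _ in range(k):
        # Add the element with the largest marginal gain.
        best = max(
            (e for e in ground_set if e not in selected),
            key=lambda e: objective(selected | {e}) - objective(selected),
        )
        selected.add(best)
    return selected

# Toy objective: each targeted customer covers a set of grid services;
# counting distinct covered services is monotone submodular.
coverage = {
    "c1": {"peak_shaving", "reserve"},
    "c2": {"reserve"},
    "c3": {"peak_shaving", "regulation"},
    "c4": {"regulation"},
}

def covered_services(customers):
    services = set()
    for c in customers:
        services |= coverage[c]
    return len(services)

chosen = greedy_maximize(coverage.keys(), covered_services, k=2)
print(chosen)  # two customers that together cover all three services
```

The marginal-gain rule is the whole algorithm: because a submodular objective has diminishing returns, each greedy step provably recovers a constant fraction of the remaining gap to the optimum.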

Bio: Insoon Yang is an Assistant Professor of Electrical Engineering at USC. He received B.S. degrees in Mathematics and in Mechanical Engineering (summa cum laude) from Seoul National University in 2009; and an M.S. in EECS, an M.A. in Mathematics and a Ph.D. in EECS from UC Berkeley in 2012, 2013 and 2015, respectively. Before joining USC, he was a Postdoctoral Associate at the Laboratory for Information and Decision Systems at MIT. Insoon's research interests are in stochastic control, optimization in systems and control, and energy and power systems. He currently focuses on control methods, risk management solutions and incentive mechanisms that support interactions between human users and CPS- or IoT-enabled systems with limited information. He is a recipient of the 2015 Eli Jury Award.

learning_incentives_and_optimization_for_human-energy_system_interaction.txt · Last modified: 2016/09/29 14:20 by ashutosh_nayyar