Ambiguity in Risk Preferences in Optimization and Control [2016/09/01 19:15]
Abstract:
We discuss the problem of ambiguity in risk preferences and its relation to stochastic dominance. This presentation emphasizes the increasing concave stochastic order because of its close connection with risk aversion. We begin the talk by reviewing earlier work on stochastic dominance constrained optimization. Then, we present new material on multivariate stochastic dominance constrained optimization. Multivariate stochastic dominance is relevant when a decision maker faces multiple jointly distributed random prospects, as in many network management problems. In line with earlier work, we find that utility functions themselves are the Lagrange multipliers of our multivariate stochastic dominance constraints. Next, we move to the dynamic setting and show how to place stochastic dominance constraints on the distribution of long-run average and discounted reward in Markov decision processes (MDPs). It turns out that convex analytic methods are a natural tool for dominance constrained MDPs, and this class of MDPs can be solved with linear programming. In parallel with the static case, utility functions appear in the dual linear programming problem of the dominance constrained MDP. We conclude the presentation by arguing for the value of a measure of regret based on stochastic dominance, and for its applicability to both optimization and control. This work has been done jointly with Rahul Jain, J. George Shanthikumar, and Z. Max Shen.
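As a concrete illustration of the kind of constraint the abstract refers to, the following is a minimal sketch (not from the talk itself) of testing whether one discrete random prospect dominates another in the increasing concave (second-order) stochastic order. It uses the standard shortfall characterization, X dominates Y iff E[(eta - X)+] <= E[(eta - Y)+] for every threshold eta; the function name `icv_dominates` and the equal-weight sample representation are illustrative assumptions.

```python
import numpy as np

def icv_dominates(x, y, tol=1e-9):
    """Check whether X dominates Y in the increasing concave
    (second-order) stochastic order, with X and Y given as
    equally likely sample values.

    Shortfall characterization: X >=_icv Y iff
    E[(eta - X)+] <= E[(eta - Y)+] for all eta. Since both
    expected shortfalls are piecewise linear in eta with kinks
    only at support points, it suffices to check eta at the
    support points of X and Y.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    for eta in np.union1d(x, y):
        if np.mean(np.maximum(eta - x, 0.0)) > np.mean(np.maximum(eta - y, 0.0)) + tol:
            return False
    return True

# A risk-averse comparison: the sure outcome [3, 3] dominates its
# mean-preserving spread [1, 5], but not the other way around.
print(icv_dominates([3, 3], [1, 5]))  # True
print(icv_dominates([1, 5], [3, 3]))  # False
```

In a dominance constrained optimization problem, each such shortfall inequality becomes a linear constraint on the decision variables, which is what makes the linear programming formulations mentioned in the abstract possible.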
Bio: Will Haskell earned his B.S. in mathematics from the University of Massachusetts Amherst in 2006, and his Ph.D. in operations research from the University of California, Berkeley in 2011. He was advised by J. George Shanthikumar and Z. Max Shen. His research interests include convex optimization, Markov decision processes, stochastic dominance, and risk management. He is currently a visiting assistant professor in the Industrial and Systems Engineering Department at the University of Southern California.