When: March 2, 2016, 12:30 PM
Location: 3rd Floor Orchard View Room, Discovery Building
Contact: 608-316-4401, email@example.com
A Control Perspective on Optimization of Strongly Convex Functions
We present our recent progress on adopting control tools for the optimization of strongly convex functions. First, building upon the existing integral quadratic constraint (IQC) analysis framework for first-order optimization methods, we develop an IQC approach to analyze the stochastic average gradient (SAG) method. The SAG method is formulated as a stochastic jump system perturbed by static nonlinearities described by almost sure IQCs. Almost sure IQCs directly lead to mean square IQCs, which can be combined with dissipation inequalities for stochastic jump systems to derive linear matrix inequality (LMI) testing conditions for the SAG method. New upper bounds on the convergence rate of the SAG method are obtained by solving the resultant semidefinite programs. Second, we present a parameter-varying system approach for the analysis of first-order optimization methods. When the objective function is twice differentiable, the mean value theorem can be used to convert many optimization methods into parameter-varying systems. We will show how to use parameter-varying system theory from the control field to recover some analysis results from the optimization field. Finally, we will discuss how to pose optimization problems as control problems. We will explain how to view optimization methods as controllers and draw some connections between optimization methods and control architectures. For example, we will see that a derivative controller is embedded in Nesterov's method. We will discuss the possibility of applying various controllers (proportional-integral-derivative control, sliding mode control, etc.) as optimization algorithms.
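As a concrete illustration of the controller view mentioned above, the momentum term in Nesterov's method can be read as a discrete derivative action on the iterate sequence. The sketch below runs Nesterov's accelerated gradient method on a toy strongly convex quadratic; the objective, step size, and momentum coefficient are illustrative textbook choices and not details taken from the talk.

```python
import numpy as np

# Toy strongly convex quadratic f(x) = 0.5 * x^T A x with minimizer at 0.
# Strong convexity parameter m and smoothness L are the eigenvalues of A.
A = np.diag([1.0, 10.0])
m, L = 1.0, 10.0

alpha = 1.0 / L                                                # step size
beta = (np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))   # momentum coefficient

def grad(x):
    return A @ x

x_prev = np.array([1.0, 1.0])
x = np.array([1.0, 1.0])
for _ in range(100):
    # beta * (x - x_prev) is a discrete "derivative" (momentum) term
    # added to the current iterate, in the spirit of derivative control.
    y = x + beta * (x - x_prev)
    x_prev, x = x, y - alpha * grad(y)

# The iterates converge to the minimizer at the origin.
print(np.linalg.norm(x))
```

Setting `beta = 0` recovers plain gradient descent, which for this conditioning converges noticeably more slowly; the momentum (derivative) term is what accelerates the dynamics.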
Bio: Bin Hu received his Bachelor's degree from the University of Science and Technology of China in 2008 and his Master's degree from Carnegie Mellon University in 2010. He is currently a Ph.D. candidate in the Aerospace Engineering and Mechanics Department at the University of Minnesota, Twin Cities, where he is also pursuing a Ph.D. minor in Statistics. He has broad interests in bridging advanced mathematical tools with engineering applications. His research currently focuses on tailoring control theory for large-scale optimization and machine learning problems.
SILO is a lecture series with speakers from the UW faculty, graduate students, and invited researchers who discuss mathematics-related topics. The seminars are organized by WID's Optimization research group.
SILO's purpose is to provide a forum that helps connect and recruit mathematically minded graduate students. SILO follows a lunch-and-listen format in which speakers present interesting math topics while the audience eats lunch.