
Optimization: Computation, Theory and Modeling
(01 Nov - 23 Dec 2012)
~ Abstracts ~
Numerical simulation of piecewise-linear models of gene regulatory networks using complementarity systems Vincent Acary, INRIA Rhône-Alpes, France
In this work, we look at one particular class of approximate models of gene regulatory networks, so-called piecewise-linear (PWL) models [1]. The PWL models are systems of coupled differential equations in which the variables denote concentrations of gene products, typically proteins. The rate of change of a concentration at a particular time point may be regulated by other proteins through direct or indirect interactions. The PWL models capture these regulatory effects by means of step functions that change their value in a switch-like manner at threshold concentrations of the regulatory proteins. The step functions are approximations of the sigmoidal response functions often found in gene regulation. PWL models with step functions have favorable mathematical properties, which allow for the analysis of steady states, limit cycles, and their stability [2,3]. The use of step functions, however, leads to discontinuities in the right-hand side of the differential equations, due to the abrupt changes of the value of a step function at its threshold. In order to deal with the discontinuities, several authors have proposed the use of differential inclusions and Filippov solutions. The proposals to extend PWL models to differential inclusions differ in subtle but nontrivial ways, giving rise to systems with non-equivalent dynamics [4]. The aim of this study is to propose a theoretically sound and practically useful method for the numerical simulation of gene regulatory networks described by PWL models. We notably show that the alternative Aizerman-Pyatnitskii extension of PWL models can be reformulated in the framework of complementarity systems or differential variational inequalities [5,6,7]. This allows us to employ the rich store of numerical methods available for these and other classes of discontinuous systems [8,9].
Moreover, we show that under two reasonable biological assumptions, posing constraints on the admissible network structures, the different Filippov extensions that have been proposed, as well as the hyperrectangular over-approximation, are equivalent. This means that the numerical simulation approach developed in this paper is valid for a range of different solution concepts for PWL models of gene regulatory networks. We illustrate the interest of our numerical simulation approach by means of the analysis of three synthetic networks published in the literature: the repressilator, an oscillator with positive feedback, and the IRMA network. We develop PWL models of these networks, either from scratch or by adapting existing ODE models, and numerically simulate the dynamic response of these networks to external stimuli. The simulations are shown to reproduce known qualitative features of these networks, notably the capability to generate (damped) oscillations for the first two networks, and a switch-on/switch-off response after a change in growth medium for the third. We believe these examples demonstrate that the numerical simulation approach developed in this paper provides a useful extension of the toolbox of modelers of gene regulatory networks.
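As a toy illustration of the switch-like regulation described in this abstract, the following sketch (not taken from the paper) integrates a two-gene mutual-inhibition PWL model with naive forward Euler; all rate constants and thresholds are invented, and a serious implementation would treat threshold crossings through the complementarity reformulation discussed above rather than by brute-force time stepping.

```python
import numpy as np

def step_minus(x, theta):
    """Step function s^-(x, theta): 1 below the threshold, 0 above.
    An idealization of a steeply decreasing sigmoidal response."""
    return 1.0 if x < theta else 0.0

def simulate_pwl(x0, kappa, gamma, theta, t_end, dt=1e-3):
    """Forward-Euler integration of the two-gene PWL toggle
        dx1/dt = kappa1 * s^-(x2, theta2) - gamma1 * x1
        dx2/dt = kappa2 * s^-(x1, theta1) - gamma2 * x2
    where each gene represses the other."""
    x = np.array(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        dx1 = kappa[0] * step_minus(x[1], theta[1]) - gamma[0] * x[0]
        dx2 = kappa[1] * step_minus(x[0], theta[0]) - gamma[1] * x[1]
        x += dt * np.array([dx1, dx2])
    return x

# starting with gene 1 already expressed drives the switch to the
# steady state x1 = kappa1/gamma1, x2 = 0
x = simulate_pwl(x0=(1.0, 0.0), kappa=(2.0, 2.0), gamma=(1.0, 1.0),
                 theta=(0.5, 0.5), t_end=20.0)
```

The bistability of this toggle is the kind of qualitative switch-on/switch-off behaviour that the simulations in the abstract reproduce.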
References:
[1] L. Glass and S.A. Kauffman (1973). The logical analysis of continuous nonlinear biochemical control networks. Journal of Theoretical Biology, 39 (1), 103-129.
[2] R. Edwards (2000). Analysis of continuous-time switching networks. Physica D, 146 (1-4), 165-199.
[3] H. de Jong, J.-L. Gouzé, C. Hernandez, M. Page, T. Sari, and J. Geiselmann (2004). Qualitative simulation of genetic regulatory networks using piecewise-linear models. Bulletin of Mathematical Biology, 66 (2), 301-340.
[4] A. Machina and A. Ponosov (2011). Filippov solutions in the analysis of piecewise linear models describing gene regulatory networks. Nonlinear Analysis, 74, 882-900.
[5] J.-S. Pang and D. Stewart (2008). Differential variational inequalities. Mathematical Programming A, 113 (2).
[6] W.P.M.H. Heemels, J.M. Schumacher, and S. Weiland (2000). Linear complementarity systems. SIAM Journal on Applied Mathematics, 60, 1234-1269.
[7] V. Acary and B. Brogliato (2008). Numerical Methods for Nonsmooth Dynamical Systems: Applications in Mechanics and Electronics. Lecture Notes in Applied and Computational Mechanics 35, Springer, Berlin.
[8] F. Facchinei and J.-S. Pang (2003). Finite-Dimensional Variational Inequalities and Complementarity Problems, volumes I & II. Springer Series in Operations Research, Springer-Verlag, New York.
[9] S.C. Billups, S.P. Dirkse, and M.C. Ferris (1997). A comparison of large scale mixed complementarity problem solvers. Computational Optimization and Applications, 7, 3-25.
A first-order framework for solving ellipsoidal inclusion problems Selin Damla Ahipasaoglu, Singapore University of Technology and Design
The first part of the talk will focus on the ellipsoidal inclusion and optimal experimental design problems in parametric form. We will show that these two problems are dual to each other (for a fixed pair of parameters) and derive the optimality conditions for both. These lead to a first-order framework for obtaining approximate solutions, where the approximation error can be made arbitrarily small. We will survey recent results for special cases, including the Minimum Volume Enclosing Ellipsoid problem, focusing especially on the global and local convergence properties of the first-order algorithm. In the second part, we will extend the framework and some of the complexity results to the cylindrical inclusion problem, which is significantly more challenging than the ellipsoidal inclusion problem.
Expected utility combinatorial optimization: submodular maximization approaches Shabbir Ahmed, Georgia Institute of Technology, USA
We address combinatorial optimization problems with random objective coefficients where the goal is to maximize the expected utility of the objective value. In risk-averse settings, using the sample average approximation framework, we can approximate such problems as deterministic submodular maximization problems.
In the first part of the talk we consider expected utility knapsack problems. We show that a polynomial number of samples is enough for a deterministic approximation that is close in relative error. Then, exploiting the strict monotonicity of typical utility functions, we present an algorithm that maximizes an increasing submodular function over a knapsack constraint with an approximation ratio better than 1 - 1/e. For power utility functions we provide explicit approximation ratios leading to a polynomial-time approximation algorithm.
In the second part of the talk we consider mixed integer programming (MIP) approaches for expected utility combinatorial optimization problems. A standard MIP formulation using submodular inequalities is ineffective, except for very small instances. We perform a polyhedral analysis of a relevant mixedinteger set and, by exploiting the structure of the utility function, strengthen the standard submodular formulation significantly. The main technique used is subadditive lifting by solving special submodular maximization problems. Computational experiments on expected utility maximization in capital budgeting show the effectiveness of the approach.
The first part of the talk is based on work with Jiajin Yu at Georgia Tech, and the second part on work with Alper Atamturk at UC Berkeley.
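For readers unfamiliar with the deterministic subproblem, the following sketch shows the classical cost-benefit greedy for monotone submodular maximization over a knapsack, safeguarded by the best single item (which gives a (1 - 1/e)/2 guarantee); the talk's algorithm is different and achieves a ratio better than 1 - 1/e. The coverage function and item data below are invented for illustration.

```python
def greedy_submodular_knapsack(f, items, cost, budget):
    """Cost-benefit greedy: repeatedly add the feasible item with the best
    marginal-gain-to-cost ratio, then compare against the best single item."""
    chosen, remaining = [], set(items)
    while True:
        best, best_ratio = None, 0.0
        spent = sum(cost[i] for i in chosen)
        for e in remaining:
            if spent + cost[e] <= budget:
                gain = f(chosen + [e]) - f(chosen)
                if gain / cost[e] > best_ratio:
                    best, best_ratio = e, gain / cost[e]
        if best is None:
            break
        chosen.append(best)
        remaining.remove(best)
    # safeguard: a single expensive item can beat the whole greedy set
    singles = [[e] for e in items if cost[e] <= budget]
    best_single = max(singles, key=f, default=[])
    return chosen if f(chosen) >= f(best_single) else best_single

# toy monotone submodular function: size of the union of covered elements
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a", "b", "c", "d"}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
sol = greedy_submodular_knapsack(f, list(cover), {1: 1, 2: 1, 3: 1, 4: 3}, budget=3)
```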
Solving conic optimization problems with MOSEK Erling Andersen, MOSEK ApS, Denmark
MOSEK is a software package aimed at solving large-scale conic optimization problems. For more than 10 years, MOSEK has been able to solve both linear and conic quadratic problems using an interior-point method. The upcoming MOSEK version 7, however, has been extended to also handle semidefinite optimization problems. In this talk we will therefore discuss the new semidefinite optimization capabilities in MOSEK, including a presentation of the available interfaces and implemented algorithms. Finally, we will present benchmark results demonstrating MOSEK's semidefinite optimization capabilities.
A compliant viscoplastic particle contact model based on differential variational inequalities Mihai Anitescu, Argonne National Laboratory, USA
This work describes an approach to the simulation of contacts with compliance and damping between three-dimensional shapes using the framework of Differential Variational Inequality (DVI) theory. Within the context of nonsmooth dynamics, we introduce an extension to the classical set-valued model for frictional contacts between rigid bodies, allowing contacts to experience local compliance, viscosity and plasticization. Different types of yield surfaces can be defined for various types of contact, a versatile approach that contains classic dry Coulomb friction as a special case. The resulting problem is a differential variational inequality that can be solved, at each integration time step, as a variational inequality over a convex set.
Joint work with Alessandro Tasora, Dan Negrut, and Silvia Negrini.
Signal reconstruction from the magnitude of subspace components Christine Bachoc, Université Bordeaux 1, France
We consider the situation where a $d$-dimensional signal is projected onto a collection of $k$-dimensional vector subspaces $\{V_1,\dots,V_n\}$ and we wish to reconstruct the signal from the norms of its projections. We will discuss and compare, on the one hand, exact reconstruction under certain algebraic properties of the collection of subspaces and, on the other hand, reconstruction using semidefinite programming for a random choice of subspaces.
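A minimal linear-algebra illustration of the reconstruction problem, under assumptions not taken from the talk (one-dimensional subspaces, a noiseless toy instance in R^2, and a direct least-squares solve in place of semidefinite programming): the squared projection norms are linear measurements of the rank-one matrix xx^T, which can be recovered and then factored, determining the signal up to a global sign.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 5
x_true = np.array([1.0, -2.0])
U = rng.standard_normal((n, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)  # unit spans of the lines V_i
meas = (U @ x_true) ** 2                       # squared projection norms

# each measurement is tr(u u^T X) with X = x x^T, i.e. linear in X:
# solve the linear system for X, then factor X by an eigendecomposition
A = np.array([np.outer(u, u).ravel() for u in U])
X = np.linalg.lstsq(A, meas, rcond=None)[0].reshape(d, d)
X = 0.5 * (X + X.T)                            # symmetrize against round-off
w, V = np.linalg.eigh(X)
x_rec = np.sqrt(max(w[-1], 0.0)) * V[:, -1]    # recovers x up to global sign
```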
Modeling and solving dynamic user equilibria using differential complementarity systems Xuegang (Jeff) Ban, Rensselaer Polytechnic Institute, USA
Dynamic user equilibria (DUE) aim to predict the time-varying traffic states of a dynamic traffic network. Modeling and solving DUE is one of the most challenging problems in transportation. We present our recent collaborative research on applying differential complementarity systems (DCS) to modeling and solving DUE in continuous time. The proposed method can capture the two unique aspects of DUE: drivers' choice behavior (route and departure time choices) and traffic dynamics. In particular, a specific traffic queuing model is integrated to capture realistic traffic phenomena such as spillover of congestion (queues) from one location to its upstream locations in a traffic network. We also show that the DCS framework allows us to conduct rigorous analysis of the resulting model, such as solution convergence and stability (from a discrete-time solution to a continuous-time solution), and the selection of the discretization scheme and discrete time step size.
A computationally tractable theory of performance analysis in stochastic systems Dimitris Bertsimas, Massachusetts Institute of Technology, USA
Modern probability theory, whose foundation is based on the axioms set forth by Kolmogorov, is currently the major tool for performance analysis in stochastic systems. While it offers insights in understanding such systems, probability theory is really not a computationally tractable theory. Correspondingly, some of its major areas of application remain unsolved when the underlying systems become multidimensional: queueing networks, network information theory, pricing multidimensional financial contracts, and auction design in multi-item, multi-bidder auctions, among others.
We propose a new approach to analyzing stochastic systems based on robust optimization. The key idea is to replace the Kolmogorov axioms as primitives of probability theory with some of the asymptotic implications of probability theory, namely the central limit theorem and the law of large numbers, and to define appropriate robust optimization problems for performance analysis. In this way, performance analysis questions become highly structured optimization problems (linear, conic, mixed integer) for which there exist efficient, practical algorithms capable of solving truly large scale systems.
We demonstrate that the proposed approach achieves computationally tractable methods for (a) analyzing multiclass queueing networks, (b) characterizing the capacity region of network information theory and associated coding and decoding methods generalizing the work of Shannon, (c) pricing multidimensional financial contracts generalizing the work of Black, Scholes and Merton, and (d) designing multi-item, multi-bidder auctions generalizing the work of Myerson.
This is joint work with my doctoral student at MIT, Chaithanya Bandi.
Short Bio:
Dimitris Bertsimas is currently the Boeing Leaders for Global Operations Professor of Management and the co-director of the Operations Research Center at the Massachusetts Institute of Technology. He received a BS in Electrical Engineering and Computer Science from the National Technical University of Athens, Greece in 1985, an MS in Operations Research from MIT in 1987, and a Ph.D. in Applied Mathematics and Operations Research from MIT in 1988. Since 1988, he has been on the MIT faculty. His research interests include optimization, stochastic systems, data mining, and their applications. In recent years he has worked on robust optimization, health care and finance. He has published over 150 papers and 3 graduate-level textbooks. He is a member of the National Academy of Engineering and has received several research awards, including the Farkas Prize, the SIAM Optimization Prize and the Erlang Prize. He has supervised 48 doctoral students at MIT and is currently supervising 11 others.
Computational and sample tradeoffs via convex relaxation Venkat Chandrasekaran, California Institute of Technology, USA
In modern data analysis, one is frequently faced with statistical inference problems involving massive datasets. Processing such large datasets is usually viewed as a substantial computational challenge. However, if data are a statistician's main resource then access to more data should be viewed as an asset rather than as a burden. In this talk we discuss a computational framework based on convex relaxation to reduce the computational complexity of an inference procedure when one has access to increasingly larger datasets. Essentially, the statistical gains from larger datasets can be exploited to reduce the runtime of inference algorithms. (Joint work with Michael Jordan.)
Residual minimization for stochastic variational inequalities Xiaojun Chen, The Hong Kong Polytechnic University, Hong Kong
This talk presents a new expected residual minimization formulation for a class of stochastic variational inequalities by using the gap function. The objective function of the expected residual minimization problem is nonnegative and Lipschitz continuous. Moreover, it is convex for some stochastic linear variational inequalities, which helps us guarantee the existence of a solution and convergence of approximation methods. We propose a globally convergent (a.s.) smoothing sample average approximation (SSAA) method to minimize the expected residual function. We show that the residual minimization problem and its SSAA problems have minimizers in a compact set and any cluster point of minimizers and stationary points of the SSAA problems is a minimizer and a stationary point of the expected residual minimization problem (a.s.). Our examples come from applications involving traffic flow problems. We show that the conditions we impose are satisfied and that the solutions, efficiently generated by the SSAA procedure, have desirable properties.
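A toy version of the expected residual minimization idea, with invented data and two simplifications relative to the talk (the natural min-residual instead of the gap function, and a grid search instead of a proper optimization method), still shows the smoothing-plus-sample-average structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# stochastic LCP: 0 <= x  _|_  x + q(w) >= 0, with q(w) ~ Uniform[-2, -1];
# no single x solves it for every w, so we minimize an averaged residual
q_samples = rng.uniform(-2.0, -1.0, size=1000)

def smoothed_min(a, b, mu=1e-3):
    """CHKS smoothing of min(a, b), the kind of smoothing used in
    smoothing sample average approximation schemes."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4 * mu ** 2))

def saa_residual(x):
    """Sample average of the squared smoothed residual min(x, x + q)^2."""
    return np.mean(smoothed_min(x, x + q_samples) ** 2)

# crude minimization over a compact interval, for illustration only;
# the minimizer compromises between the scenario solutions x = -q(w)
grid = np.linspace(0.0, 3.0, 3001)
x_star = grid[np.argmin([saa_residual(x) for x in grid])]
```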
Joint work with H. Fang, M. Fukushima, G. Lin, A. Sumalee, R. J-B Wets, C. Zhang, and Y. Zhang.
Semismooth Newton and two-phase active set methods Gillian Chin, Northwestern University, USA
We present a semismooth Newton framework that gives rise to a rich class of first- and second-order methods for structured convex optimization. The generality of our approach allows us to analyze their convergence properties in a unified setting, and to contrast their algorithmic components. These methods are well suited to a variety of machine learning applications, and include a diverse range of techniques such as orthant-based methods, block active set methods and a second-order iterative soft-thresholding method. We will also propose a new block active set method that performs multiple changes in the active manifold estimate and incorporates a novel mechanism for correcting the estimates when needed.
AMPL tutorial on solving complementarity problems Sou-Cheng Choi, The University of Chicago, USA
AMPL is an expressive programming language for modeling and solving optimization problems. The de facto AMPL solver of choice for complementarity problems is PATH, developed by Dirkse, Ferris, and Munson. This tutorial introduces basic elements of AMPL (version 20120629) and showcases sample programs for solving complementarity problems via PATH (version 4.7.03). We feature a case study from CIM-EARTH, a framework for solving large-scale computable general equilibrium models with applications in climate change and economic policy.
This is joint work with Todd Munson.
Barrier-based smoothing proximal point algorithm for nonlinear complementarity problems over closed convex cones Chek-Beng Chua, Nanyang Technological University
We present a new barrier-based method of constructing smoothing approximations for the Euclidean projector onto closed convex cones. These smoothing approximations are used in a smoothing proximal point algorithm to solve monotone nonlinear complementarity problems (NCPs) over convex cones via the normal map equation. The smoothing approximations allow the smoothed normal map equations to be solved with Newton's method, and do not require additional analytical properties of the Euclidean projector. The use of proximal terms in the algorithm adds stability to the solution of the smoothed normal map equation, and avoids numerical issues due to ill-conditioning at iterates near the boundary of the cones. We prove a sufficient condition on the barrier used that guarantees the convergence of the algorithm to a solution of the NCP. The sufficient condition is satisfied by all logarithmically homogeneous barriers. Preliminary numerical tests on semidefinite programming problems (SDPs) show that our algorithm is comparable with the Newton-CG augmented Lagrangian algorithm (SDPNAL) proposed in [X.Y. Zhao, D. Sun, and K.-C. Toh, SIAM J. Optim. 20 (2010), 1737-1765].
Special linear complementarity problems Richard W Cottle, Stanford University, USA
This talk will address two related topics on Linear Complementarity Problems (LCPs) having special properties. One of these concerns conditions that allow for their (strongly) polynomial-time solution by standard pivoting algorithms of the field. The other involves conditions that assure the existence of solutions having support of prescribed cardinality.
Two-stage stochastic optimization problems with stochastic ordering constraints Darinka Dentcheva, Stevens Institute of Technology, USA
Stochastic orders formalize preferences among random outcomes and are widely used in statistics and economics. We focus on stochastic optimization problems involving stochastic ordering relations as constraints that relate performance functionals, depending on our decisions and valued in an appropriate Lp-space, to benchmark random outcomes. Necessary and sufficient conditions of optimality and duality theory for these problems involve expected utility theory, dual (rank-dependent) utility theory, and coherent measures of risk, providing a link between various approaches to risk-averse optimization.
The most prominent stochastic orders are the first- and second-order stochastic dominance relations, and the increasing convex order. These relations are defined by a continuum of compositions of convex nonsmooth functions with possibly nonconvex smooth functions. The results contribute to the theory of semi-infinite and composite optimization in vector spaces.
The talk focuses on the use of the stochastic order as a constraint in two-stage stochastic optimization problems. We present new characterizations of the increasing convex order relation. We propose decomposition methods to solve the problems and prove their convergence. Numerical results confirm the efficiency of the methods. Some applications will be outlined.
Solving quasi-variational inequalities: theoretical and numerical results Francisco Facchinei, Sapienza Università di Roma, Italy
We propose to solve a general quasi-variational inequality by using its Karush-Kuhn-Tucker conditions. To this end we use a globally convergent algorithm based on a potential reduction approach. We establish global convergence results for many interesting instances of quasi-variational inequalities, vastly broadening the class of problems that can be solved with theoretical guarantees. The algorithm has been implemented and tested on a vast array of problems, yielding very promising results.
The price of energy storage Michael Ferris, University of Wisconsin-Madison, USA
Uncertainty in forecasts and hourly fluctuations in demand are important stochastic effects that can be mitigated by storage (new batteries, pumping water uphill, charging EPV's). Determining the value of storage is a critical component in stimulating the design and implementation of energy storage facilities and an associated market. We propose a multi-period stochastic MOPEC (multiple optimization problems with equilibrium constraints) formulation for the economic planning problem and exhibit the use of hydro, thermal, nuclear and renewable energy sources, coupled with mechanisms to store energy between periods and endogenously determine the price for storage. Extensions of the framework that facilitate hitting environmental goals within a non-cooperative generation capability will also be outlined. The model is implemented using the GAMS/EMP framework, and solved using a combination of the PATH solver and global optimization solvers.
The LP-Newton method and beyond: solving nonsmooth equations with non-isolated solutions Andreas Fischer, Dresden University of Technology, Germany
The problem of solving a system of possibly nonsmooth equations appears in several applications. For example, complementarity problems, necessary conditions for generalized Nash equilibrium problems, or Karush-Kuhn-Tucker conditions of an inequality constrained optimization problem can be written in this way. The LP-Newton method requires the solution of a linear program per step and can be successfully applied to nonsmooth equations with non-isolated solutions under conditions that are weaker than existing ones. A local iterative framework for solving systems of equations under convex constraints will be presented that contains the LP-Newton method as a particular case. The framework shares the same weak conditions for local superlinear convergence. Different algorithms belonging to the framework will be described.
Co-authors: Francisco Facchinei, University of Rome La Sapienza; Markus Herrich, TU Dresden
Speeding up the spectral bundle method by solving the quadratic semidefinite subproblems with a PSQMR approach Christoph Helmberg, Chemnitz University of Technology, Germany
The spectral bundle method is tuned to solving semidefinite programs (SDP) with large semidefinite matrix variables having constant or bounded trace. It exploits that these can be formulated equivalently as problems of minimizing the maximum eigenvalue of affine matrix functions and uses repeated eigenvalue computations to form a semidefinite model of the objective function. The next candidate point is determined as the minimizer of the model plus a quadratic augmenting term. Solving this quadratic semidefinite subproblem by interior-point methods formed the bottleneck whenever bundle sizes increased. We report on our experience with a preconditioned symmetric quasi-minimal residual (PSQMR) approach for computing the Newton step in this interior-point method, as in the package QSDP. On many of our randomly generated test instances this results in significant savings. Indeed, the cost of the interior-point method is then mostly negligible in comparison to the cost of computing the eigenvalues and the cost matrices of the quadratic semidefinite subproblem. For less random data we are still evaluating the approach.
Joint work with Kim-Chuan Toh, National University of Singapore.
Several approaches for the derivation of stationarity conditions for elliptic MPECs with upper-level control constraints Michael Hintermüller, Humboldt-Universität zu Berlin, Germany
The derivation of multiplierbased optimality conditions for elliptic mathematical programs with equilibrium constraints (MPEC) is essential for the characterization of solutions and development of numerical methods. Though much can be said for broad classes of elliptic MPECs in both polyhedric and nonpolyhedric settings, the calculation becomes significantly more complicated when additional constraints are imposed on the control. In this talk we develop three derivation methods for constrained MPEC problems: via concepts from variational analysis, via penalization of the control constraints, and via penalization of the lowerlevel problem with the subsequent regularization of the resulting nonsmoothness. The developed methods and obtained results are then compared and contrasted.
Pareto efficiency in robust optimization Dan Iancu, Stanford University, USA
In this talk, we formalize and adapt the well-known concept of Pareto efficiency in the context of the popular robust optimization (RO) methodology. We argue that the classical RO paradigm need not produce solutions that possess the associated property of Pareto optimality, and illustrate via examples how this could lead to inefficiencies and suboptimal performance in practice. We provide a basic theoretical characterization of Pareto robustly optimal (PRO) solutions, and extend the RO framework by proposing practical methods that verify Pareto optimality and generate solutions that are PRO. Critically, our methodology involves solving optimization problems of the same complexity as the underlying robust problems, hence the potential improvements from our framework come at essentially no computational cost. We perform numerical experiments drawn from three different application areas (portfolio optimization, inventory management, and project management), which demonstrate that PRO solutions have a significant upside compared with solutions obtained via classical RO methods, at no extra cost or downside. (Joint work with Nikolaos Trichakis.)
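The distinction between robust optimality and Pareto robust optimality can be seen on a tiny discrete example (invented here, not from the paper): several solutions attain the best worst-case payoff, but one of them is weakly dominated scenario by scenario.

```python
from itertools import product

# toy robust maximization: choose x in {0,1}^2 with x1 + x2 <= 1,
# payoff c.x, and two equally plausible objective scenarios
U = [(1.0, 0.0), (0.0, 1.0)]
X = [x for x in product((0, 1), repeat=2) if sum(x) <= 1]

def payoffs(x):
    return tuple(c[0] * x[0] + c[1] * x[1] for c in U)

worst = {x: min(payoffs(x)) for x in X}
ro_opt = [x for x in X if worst[x] == max(worst.values())]  # robustly optimal

def is_pareto(x, candidates):
    """x is Pareto robustly optimal if no feasible y is at least as good in
    every scenario and strictly better in at least one."""
    px = payoffs(x)
    return not any(
        all(a >= b for a, b in zip(payoffs(y), px))
        and any(a > b for a, b in zip(payoffs(y), px))
        for y in candidates)

pro = [x for x in ro_opt if is_pareto(x, X)]
```

Here (0, 0), (0, 1) and (1, 0) are all robustly optimal (worst-case payoff 0), yet (0, 0) is dominated by either of the others, so only two of the three are PRO.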
Symmetric extensions of the NT direction Florian Jarre, Heinrich Heine Universität Düsseldorf, Germany
A stable symmetrization of the linear systems arising in interior-point methods for solving linear programs is introduced. A comparison of the condition numbers of the resulting interior-point linear systems with those of other commonly used approaches indicates that the new approach may be best suited to an iterative solution. It is shown that there is a natural generalization of this symmetrization to the NT search direction for solving semidefinite programs. The generalization relies heavily on the symmetry properties of the NT direction and may not be applicable to any of the other search directions commonly used in semidefinite programming.
The search directions generated by the iterative solver will have fairly low relative accuracy. A new interiorpoint approach is presented using such low accuracy search directions. In the numerical examples, this approach results in a surprisingly small number of outer iterations, indicating that the interiorpoint concept may also be suitable for illconditioned problems for which it is difficult to compute high accuracy search directions.
Improved lower bounds for book crossing numbers of complete bipartite graphs using semidefinite programming Etienne de Klerk, Nanyang Technological University
The crossing number problem for graphs is to draw a graph in the plane with a minimum number of edge crossings. Crossing numbers are of interest for graph visualization, VLSI design, quantum dot cellular automata, RNA folding, and other applications. On the other hand, the problem is notoriously difficult. In 1973, Erdős and Guy wrote that: "Almost all questions that one can ask about crossing numbers remain unsolved." For example, the crossing numbers of complete and complete bipartite graphs are still unknown in general. Moreover, even for cubic graphs, it is NP-hard to compute the crossing number. Different types of crossing numbers may be defined by restricting drawings; one example is a k-page (book) drawing, where all vertices must be drawn on a line (the spine of the book), and all edges must be drawn on one of k half-planes incident to the line (the k book pages). The k-page crossing number of a graph is the minimum number of edge crossings taken over all k-page drawings. In this talk we will discuss recent progress on lower bounds for the k-page crossing number of complete bipartite graphs. Our results are computer-assisted, and the main computational tool we employ is semidefinite programming. (Joint work with Dmitrii V. Pasechnik and Gelasio Salazar.)
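To make the book-drawing setting concrete, the following sketch (a naive counter, entirely separate from the talk's semidefinite programming machinery) counts crossings of a k-page drawing directly from the interleaving condition, illustrated on K_{2,2}:

```python
def book_crossings(spine, edges, page):
    """Count crossings of a k-page book drawing: vertices lie on a line in
    `spine` order and edge i is drawn on half-plane page[i]; two edges on
    the same page cross iff their endpoints interleave along the spine."""
    pos = {v: i for i, v in enumerate(spine)}
    def interleave(e, f):
        a, b = sorted((pos[e[0]], pos[e[1]]))
        c, d = sorted((pos[f[0]], pos[f[1]]))
        return a < c < b < d or c < a < d < b
    n = len(edges)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if page[i] == page[j] and interleave(edges[i], edges[j]))

# K_{2,2} with parts {0, 1} and {2, 3}: with this spine order, a single
# page forces one crossing; moving one edge to a second page removes it
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
spine = [0, 1, 2, 3]
one_page = book_crossings(spine, edges, [0, 0, 0, 0])
two_page = book_crossings(spine, edges, [0, 0, 0, 1])
```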
Economics of collective monitoring: a study of environmentally constrained electricity generators Jacek Krawczyk, Victoria University of Wellington, New Zealand
This paper tackles the issue of welfare differences between individual and collective monitoring of pollution. Emissions by thermal electricity generators are an externality that a regulator might wish to cap. In addition, electrical transmission capacity may be limited by the grid facilities. Together, they impose constraints on the generators' joint strategy space, thus coupling their strategies. To calculate the optimal emission cap for each generator to be enforced through individual monitoring, the regulator has to solve a constrained welfare-maximisation problem, where the constraint is the ambient pollution standard. The calculated caps will maximise welfare, but may be expensive because the implementation cost rises with the number of generators. Alternatively, the regulator could implement collective monitoring, using a single ambient pollution detector. This should ease the implementation cost, but lower welfare. We use a three-node, two-thermal-generator network model to discuss and explain the behaviour of the economic agents owning the generators, subjected to the coupled constraints. We find that the welfare loss from collective monitoring can be small. We also learn that the imposition of transmission and environmental restrictions may benefit the less efficient generator and shift the surplus share towards the emitters.
Dynamic interdiction games on Markovian PERT networks Daniel Kuhn, Imperial College London, UK
We study stochastic interdiction games where an interdictor endeavors to maximally delay the expected completion time of a project that is managed according to the early start policy. The interdictor can deploy a nonrenewable resource to interdict individual project tasks, thereby prolonging their uncertain durations. Given limited resource availability, the goal is to decide when to interdict which tasks in order to inflict maximum disruption to the project. We formulate the interdictor's decision problem as a multiple optimal stopping problem in continuous time and with decision-dependent information. Under a memoryless probabilistic model, we prove that this problem is equivalent to a Markov Decision Process (MDP) in discrete time that can be solved via efficient value iteration. The basic model is then generalized to account for implementation uncertainty. We also discuss a crashing game where the project manager can use a limited renewable resource to expedite certain tasks in order to counterbalance the interdictor's efforts. The resulting optimal stopping problem can be reformulated as a robust MDP.
Co-authors: Eli Gutin and Wolfram Wiesemann
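To make the value-iteration step concrete, the following is a minimal generic sketch of value iteration on a small discounted MDP. It is an illustrative stand-in for the discrete-time reformulation mentioned in the abstract, not the authors' interdiction model: the states, actions, transition probabilities and rewards below are invented.

```python
# Generic value iteration on a small discounted MDP.
# P[(s, a)] = list of (next_state, prob); R[(s, a)] = immediate reward.

def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        V_new = {}
        for s in states:
            # Bellman optimality update: best one-step lookahead value.
            V_new[s] = max(
                R[(s, a)] + gamma * sum(p * V[t] for t, p in P[(s, a)])
                for a in actions[s]
            )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# Toy two-state example: in state 0 we may "wait" (no reward) or "act"
# (reward 1, then move to the absorbing state 1 with no further reward).
states = [0, 1]
actions = {0: ["wait", "act"], 1: ["stay"]}
P = {(0, "wait"): [(0, 1.0)], (0, "act"): [(1, 1.0)], (1, "stay"): [(1, 1.0)]}
R = {(0, "wait"): 0.0, (0, "act"): 1.0, (1, "stay"): 0.0}
V = value_iteration(states, actions, P, R)
print(round(V[0], 4))  # acting immediately is optimal, so V[0] = 1.0
```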
Moments, positive polynomials and optimization Jean Lasserre, LAAS-CNRS, France
In this series of lectures we will describe the basics of the theory of moments and positive polynomials for polynomial optimization. In particular, and if time permits:
- The representation theorems of Schmüdgen, Putinar and Stengle for polynomials that are positive on a subset K of R^n.
- Their counterparts for the K-moment problem.
- The hierarchy of LP and semidefinite relaxations for polynomial optimization.
- Contrasting LP and semidefinite relaxations.
- Putinar's certificate of optimality versus Karush-Kuhn-Tucker.
- Another look at nonnegativity.
- Convexity.
Semidefinite programming, matrix completion and geometric graph realizations Monique Laurent, Centrum Wiskunde & Informatica and Tilburg University, The Netherlands
We consider the problem of completing a partially specified matrix to a positive semidefinite matrix, with special focus on questions related to the smallest possible rank of such a completion. We present complexity results and structural characterizations of the graph of specified entries for the existence of small-rank completions, as well as links to Euclidean graph realizations in distance geometry and to some Colin de Verdière-type graph parameters. The geometry of semidefinite programming provides a unifying setting and sometimes new, simpler proofs, as is the case for instance for Connelly's result about universal rigidity of frameworks.
Based on joint works with Marianna Eisenberg-Nagy (Budapest) and Antonios Varvitsiotis (Amsterdam).
Distributionally robust optimization: univariate and multivariate marginal bounds Karthik Natarajan, Singapore University of Technology and Design
In this talk, I will discuss distributionally robust bounds that use univariate and multivariate marginals and are particularly suitable for optimization problems.
This is joint work with Xuan Vinh Doan (University of Warwick), Shi Dongjian (NUS) and Toh Kim Chuan (NUS).
An equilibrium model of a congested oligopolistic electricity market with an imperfect cap-and-trade market for CO2 permits Shmuel Oren, University of California at Berkeley, USA
The impact and efficacy of cap-and-trade regulation on the electric power industry depend on interactions of demand elasticity, the transmission network, market structure, and strategic behavior of generation firms. This talk introduces an equilibrium model of an oligopoly electricity market in conjunction with a CO2 permit market to study such interactions. The concept of conjectural variations is employed to account for imperfect competition in the permit market. For linear demand functions and quadratic cost functions, the resulting model is represented by an LCP consisting of the KKT conditions corresponding to the optimization problems of the interacting agents in the system. The model is applied to a reduced WECC 225-bus system with a detailed representation of the California network. In particular, we examine the extent to which permit trading strategies affect the market outcome and how such outcomes depend on the initial allocation of permits.
Subspace accelerated matrix splitting algorithms for asymmetric and symmetric linear complementarity problems Daniel P. Robinson, Johns Hopkins University, USA
I present a new two-phase method for solving both asymmetric and symmetric linear complementarity problems that consists of an active-set prediction phase and an acceleration phase. The prediction phase employs matrix splitting iterations that are tailored to the structure of the linear complementarity problem. In the asymmetric case, the formidable task of pairing an acceleration phase with matrix splitting iterations is achieved by exploiting a contraction property associated with certain matrix splittings. For symmetric problems, a similar task is achieved by utilizing descent properties of specific matrix splitting iterations and projected searches. The superior optimal active-set identification property of matrix splitting iterations is illustrated with numerical experiments, which also demonstrate the general efficiency of the proposed methods.
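As background for the matrix splitting iterations the abstract builds on, here is a plain projected Gauss-Seidel sweep for the LCP "find z >= 0 with Mz + q >= 0 and z'(Mz + q) = 0", one of the classical splitting schemes, shown without the prediction or acceleration phases. The 2x2 data are invented, and M is taken symmetric positive definite so the iteration converges.

```python
# Projected Gauss-Seidel for the LCP: z >= 0, Mz + q >= 0, z'(Mz+q) = 0.

def projected_gauss_seidel(M, q, iters=200):
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Solve the i-th complementarity condition with the other
            # components frozen, then project onto the nonnegative orthant.
            s = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -s / M[i][i])
    return z

# Toy 2x2 example with an analytic solution z = [2, 0], w = Mz + q = [0, 3].
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-4.0, 1.0]
z = projected_gauss_seidel(M, q)
w = [q[i] + sum(M[i][j] * z[j] for j in range(2)) for i in range(2)]
print([round(v, 4) for v in z], [round(v, 4) for v in w])
```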
Computing Nash equilibria: recent advances Jong-Shi Pang, University of Illinois at Urbana-Champaign, USA
We present a summary of recent advances on the computation of Nash equilibria. Topics covered include: (a) a theory of nonconvex games, (b) computation of generalized affine Nash equilibria by Lemke's method, (c) optimal selection of monotone Nash equilibria, and (d) symmetric and asymmetric differential Nash equilibria.
The presentation is based on joint work with several colleagues and a graduate student.
Inverse moment problems on polytopes and optimization Dmitrii Pasechnik, Nanyang Technological University
We discuss inverse moment problems for measures supported on polytopes, convex and nonconvex, and how recently found exact approaches to the latter can be used in optimization.
References:
The talk will be based on
http://arxiv.org/abs/1210.3193
http://arxiv.org/abs/1106.5723 and
http://arxiv.org/abs/1209.4014
Stochastic optimization for resource allocation with random emergencies Georgia Perakis, Massachusetts Institute of Technology, USA
A problem arising in many industries is how to allocate a limited set of resources to perform a specified set of jobs. These resources are often shared by emergency jobs that arrive randomly in the future. Thus, the allocation needs to be flexible to incorporate these future emergencies. In this talk, we develop and analyze a new formulation and solution methods for this problem that allow us to perform interesting analytics in important business settings. Our work is motivated by a gas utility's problem of allocating service crews to two types of jobs: standard jobs and emergency gas leak repair jobs. Standard jobs are known in advance, but need to be scheduled before their deadlines. Emergency gas leak jobs arrive randomly throughout the day, but need to be attended to as soon as they arrive.
We propose a two-phase decomposition for the problem. The first phase is a job scheduling phase, where standard jobs are scheduled on days before the deadlines, but without overloading a day with too much work. The second phase is a crew assignment phase, which assigns jobs scheduled on each day to crews under the assumption that a random number of gas leak jobs will arrive later in the day. For the first phase, we propose a scheduling algorithm based on linear programming which is easy to implement with commercial solvers. We prove a data-driven performance guarantee for this algorithm. For the second phase, we propose an algorithm for assigning crews based on the structure of the optimal crew assignment. In simulations, both algorithms are computationally efficient and result in allocations almost as "good" as the optimal allocation. The models and algorithms we have developed can be applied to other settings where resources need to be allocated flexibly to handle random emergencies.
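The first phase above is LP-based; as a deliberately simple stand-in that only illustrates the underlying load-balancing idea (schedule each job before its deadline without overloading any day), one could use a greedy rule: take jobs in deadline order and place each on the currently least-loaded feasible day. The job data below are invented.

```python
# Greedy illustration of the phase-one idea (NOT the authors' LP method):
# assign each standard job, in order of deadline, to the least-loaded day
# at or before its deadline.

def schedule_jobs(jobs, n_days):
    """jobs: list of (hours, deadline_day); returns (job -> day, day loads)."""
    load = [0.0] * n_days
    assignment = {}
    for idx, (hours, deadline) in sorted(
        enumerate(jobs), key=lambda t: t[1][1]
    ):
        # Feasible days are 0..deadline; pick the one with least work so far.
        day = min(range(deadline + 1), key=lambda d: load[d])
        load[day] += hours
        assignment[idx] = day
    return assignment, load

jobs = [(3.0, 0), (2.0, 1), (4.0, 2), (1.0, 2)]  # (hours, latest feasible day)
assignment, load = schedule_jobs(jobs, 3)
print(load)  # balanced daily workloads: [3.0, 3.0, 4.0]
```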
In collaboration with a large multi-state utility, we use our models for conducting analytics to develop strategies for the company in creating flexibility for handling random emergencies. In simulations using actual data and our models, we highlight how process changes impact crew utilization and labor costs. We demonstrate the financial impact of these new business processes on a utility with an annual labor cost of $1 billion. Simulations reveal that creating new business processes can potentially reduce annual overtime hours by 22%, resulting in an $84 million reduction in the utility's annual labor cost.
This is joint work with Joline Uichanco (MIT) and Sid Balwani (MIT) and together with industry partners Mallik Angalakudati, Jorge Calzada, Bikram Chatterjee, and Nick Raad.
Equilibrium, uncertainty and risk in electricity pool markets Andrew B. Philpott, The University of Auckland, New Zealand
The correspondence of competitive partial equilibrium with a social optimum is well documented in the welfare theorems of economics. These theorems apply to single-period electricity pool auctions in which price-taking agents maximize profits at competitive prices, and extend naturally to standard models with locational marginal prices. In markets where the auctions are repeated over many periods, agents that are subject to intertemporal constraints (such as ramping or fuel restrictions) offer energy to any single-period auction to optimize their current and future profit, accounting for possible values of future prices. This can lead to welfare losses, because the market at the current point in time might not deliver the right future price information to the agents in the presence of uncertainty in environmental factors affecting future prices. The situation is complicated by risk-averse agents and a lack of market instruments to hedge risk. We illustrate some of the consequences of these factors on market outcomes using simple competitive equilibrium models in which agents are endowed with coherent risk measures.
(Joint work with Roger Wets and Michael Ferris).
Risk and competition in hydro-dominated electricity markets Andrew B. Philpott, The University of Auckland, New Zealand
A major concern for regulators of electricity markets is the extent to which observed prices are marked up over a so-called perfectly competitive benchmark. These markups can be created by large generating firms choosing their asking prices strategically, so that the clearing price is higher than what one would expect in perfect competition. Estimating these markups is relatively straightforward in markets with thermal generation having no intertemporal linkages. Here the competitive benchmark comes in each trading period from agents offering energy at short-run marginal cost. It is more complicated in hydro-thermal systems with uncertain inflows, where the risk of energy shortages affects the opportunity cost of hydro supply. If generators are risk-neutral, then a competitive equilibrium can be computed in principle by solving a large-scale stochastic optimal control problem. For risk-averse agents, this becomes a (risked) competitive equilibrium problem that is computable for simple models. We show the preliminary results of our experiments with these models that illustrate some of the interesting insights that can be drawn from them.
(Joint work with Roger Wets and Michael Ferris)
Computing the nearest Euclidean distance matrix with low embedding dimensions Houduo Qi, University of Southampton, UK
Euclidean distance embedding appears in many high-profile applications including wireless sensor network localization, where not all pairwise distances among sensors are known or accurate. Classical Multi-Dimensional Scaling (cMDS) generally works well when the partial or contaminated Euclidean Distance Matrix (EDM) is close to the true EDM, but otherwise performs poorly. A natural step preceding cMDS would be to calculate the nearest EDM to the known matrix. A crucial condition on the desired nearest EDM is that it have a low embedding dimension, and this makes the problem nonconvex. There exists a large body of publications that deal with this problem: some try to solve the problem directly, while others solve convex relaxations of it. In this paper, we propose a numerical method that aims to solve the problem directly. Our method is strongly motivated by the majorized penalty method of Gao and Sun for low-rank positive semidefinite matrix optimization problems. The basic geometric object in our study is the set of EDMs having a low embedding dimension. We establish a zero duality gap result between the problem and its Lagrangian dual problem, which also motivates the majorization approach adopted. Numerical results show that the method works well for the Euclidean embedding of network coordinate systems and for a class of large-scale sensor network localization problems.
Stochastic equilibrium in investment models: capacity expansion in the power sector Daniel Ralph, University of Cambridge, UK
An investor in power generation assets faces unprecedented uncertainty about the evolution of the sector. The market equilibrium is hence one under uncertainty. Agents can be risk neutral or risk averse. We therefore insert risk functions in order to account for idiosyncratic risk (risk that is not priced by the CAPM) in investments. Incorporating a risk function on the cost in a standard (stochastic) capacity expansion planning model can be done, and we retain a convex program, but it raises questions of interpretation. We structure the discussion of the interpretation around market completeness: in a world of perfect risk trading we can derive a price vector for all instruments from a system risk function. The complete market can be represented in terms of stochastic programming. The assumption of perfect risk trading is, however, rather heroic for investments that last 20 to 30 years. We hence relax the assumption of perfect risk trading and allow for different stochastic discount factors. The interpretation becomes more difficult since the incomplete market is no longer amenable to a stochastic optimization approach.
Co-authors:
Andreas Ehrenmann and Gauthier de Maere d'Aertrycke, GDF-Suez, CEEMS, Brussels
Yves Smeers, Université catholique de Louvain, CORE
Towards higher order semidefinite relaxations for max-cut Franz Rendl, University of Klagenfurt, Austria
The basic semidefinite relaxation for Max-Cut, underlying the Goemans-Williamson hyperplane rounding procedure, allows various tightenings. The simplest one includes constraints from the metric polytope. More refined approaches are iterative, and provide a sequence of relaxations, which come arbitrarily close to the convex hull of cut vectors, but at increasingly high computational effort. A natural systematic hierarchy was introduced by Lasserre. The first step in this hierarchy corresponds to the basic semidefinite relaxation, where the optimization is done over the set of correlation matrices. The second one is a relaxation in the space of matrices of order $n^{2}$. We propose an iterative refinement of the basic semidefinite relaxation intersected with the metric polytope. These refinements are obtained by semidefiniteness constraints on specially selected submatrices of the Lasserre matrix of order $n^2$. We provide some theoretical insights as well as first computational experience.
(joint work with Elsbeth Adams and Miguel Anjos (Montreal) and Angelika Wiegele (Klagenfurt))
Risk and regret in stochastic optimization R. Tyrrell Rockafellar, University of Washington, USA
Measures of risk have a role in optimization under uncertainty that is well recognized by now, although many new examples keep coming up. The role of measures of regret, which relate to penalties on constraints, is less understood but in fact has a profound interaction with risk. This talk will explain the connection and its important advantages for computation. A key feature is a generalization of the VaR-CVaR minimization formula to risk measures beyond CVaR.
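For readers unfamiliar with the VaR-CVaR minimization formula mentioned above, a small sample-based sketch may help. The Rockafellar-Uryasev formula expresses CVaR of a loss L at level a as min over c of c + E[(L - c)_+]/(1 - a); for a finite sample the minimum is attained at one of the sample points, so it can be evaluated by a direct scan. The loss data below are illustrative.

```python
# Sample-based CVaR via the Rockafellar-Uryasev minimization formula:
#   CVaR_a(L) = min_c  c + E[(L - c)_+] / (1 - a).

def cvar(losses, alpha):
    n = len(losses)
    def objective(c):
        return c + sum(max(l - c, 0.0) for l in losses) / ((1 - alpha) * n)
    # For an empirical distribution the minimizer is one of the sample
    # points (a sample VaR), so scanning the sample suffices.
    return min(objective(c) for c in losses)

losses = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(cvar(losses, 0.8))  # mean of the worst 20% of losses: (9 + 10) / 2 = 9.5
```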
Quantitative stability analysis of stochastic generalized equations Werner Römisch, Humboldt-Universität zu Berlin, Germany
First, we discuss two approaches to stability in stochastic optimization:
(i) using perturbation theory for optimization problems,
(ii) using stability analysis of generalized equations. We consider the second approach in more detail and study stochastic generalized equations (SGEs), which are given by the expected value of random set-valued mappings. SGEs characterize optimality conditions for stochastic optimization and equilibrium problems. We derive quantitative continuity of solution set mappings to SGEs with respect to the variation of the underlying probability distribution in some metric space of measures. The general results are applied to the stability analysis of stationary points of classical one-stage and two-stage stochastic programs, two-stage stochastic programs with equilibrium constraints, and dominance constrained stochastic programs.
(The talk reports about joint work with Yongchao Liu and Huifu Xu.)
Stochastic programming Werner Römisch, Humboldt-Universität zu Berlin, Germany
We introduce modeling aspects of optimization models with stochastic uncertainty and review basic model types in stochastic optimization:
(i) chance constrained stochastic programs,
(ii) two-stage stochastic programs,
(iii) information constraints and multistage stochastic programs,
(iv) two- and multistage mixed-integer stochastic programs,
(v) risk-averse models based on risk functionals,
(vi) models with stochastic dominance constraints,
(vii) models with equilibrium constraints.
We also review basic properties of such optimization models and discuss optimality conditions and duality. Most models are illustrated by practical examples from energy, finance, transportation, etc. We introduce stability and approximation of such models as a prerequisite for their numerical solution. In the sequel we discuss methods for designing discrete approximations via scenario generation based on Monte Carlo sampling and high-dimensional numerical integration. We discuss structures of the resulting optimization models and how they can be exploited by decomposition methods, in particular for two- and multistage (mixed-integer) models. Such decomposition schemes and solution methods for chance constrained models are also briefly discussed. Open questions and future research directions are outlined, and recent monographs and paper collections reflecting the state of the art of the field are mentioned.
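Scenario generation by Monte Carlo sampling can be illustrated on the classical newsvendor problem: the sample average approximation (SAA) replaces the expectation in the objective by an average over sampled demand scenarios, and the resulting sample problem is solved exactly. All numbers below (prices, demand distribution) are invented for illustration.

```python
# Sample average approximation (SAA) for a newsvendor problem:
# maximize E[ price * min(q, D) - cost * q ] over the order quantity q,
# with the expectation replaced by an average over sampled demands.

import random

def saa_newsvendor(price, cost, scenarios):
    def sample_profit(q):
        return sum(price * min(q, d) - cost * q for d in scenarios) / len(scenarios)
    # The sample problem's optimum lies at one of the sampled demands,
    # so a search over the scenarios solves it exactly.
    return max(scenarios, key=sample_profit)

random.seed(0)
scenarios = [random.gauss(100, 20) for _ in range(2000)]  # demand scenarios
q_star = saa_newsvendor(price=5.0, cost=3.0, scenarios=scenarios)
# Theory: the optimal q is the (1 - cost/price) = 0.4 quantile of demand,
# roughly 95 for this distribution; the SAA solution should be close.
print(round(q_star))
```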
Dynamic risk-averse optimization for Markov models Andrzej Ruszczynski, Rutgers, The State University of New Jersey, USA
We present the concept of a dynamic risk measure and discuss its important properties. In particular, we focus on time-consistency of risk measures. Next, we focus on dynamic optimization problems for Markov models. We introduce the concept of a Markov risk measure and we use it to formulate risk-averse control problems for two Markov decision models: a discounted infinite horizon model and an undiscounted transient model. For both models we derive risk-averse dynamic programming equations and a value iteration method. We also develop a risk-averse policy iteration method and we prove its convergence. We propose a version of the Newton method to solve a nonsmooth equation arising in the policy iteration method and we prove its global convergence. Finally, we present some examples.
Nonconvex cognitive radio games Gesualdo Scutari, State University of New York at Buffalo, USA
We propose a game-theoretic approach to design a cognitive radio (CR) system composed of finitely many primary users (PUs) and secondary users (SUs), wherein each SU (player) competes against the others to maximize his own opportunistic throughput by jointly choosing the sensing duration, the detection thresholds, and the power allocation over a multichannel link. The game contains constraints on the transmit power (and possibly spectral masks) as well as the maximum individual/aggregate (probabilistic) average interference tolerable by the PUs. To keep the optimization as decentralized as possible, global interference constraints are imposed via pricing. The individual players' optimization problems are nonconvex, and there are price clearance conditions associated with the nonconvex global interference constraints to be satisfied by the Nash equilibria of the game. Using variational inequality theory, we establish the existence of an aggregated first-order solution of the game, which we call a Quasi-Nash Equilibrium (QNE), and relate such a QNE to a local Nash Equilibrium. We further establish the existence of a Nash equilibrium solution under a certain smallness restriction on the interference coefficients between the PUs and SUs. Lastly, we present distributed algorithms for computing such equilibria and establish their convergence. Numerical results show that the proposed game-theoretic approach of joint sensing and power allocation yields considerable performance improvement with respect to current centralized and decentralized designs of CR systems. This is joint work with Jong-Shi Pang at UIUC.
Stochastic and hierarchical equilibrium problems in power markets Vinayak (Uday) V. Shanbhag, Pennsylvania State University, USA
Electricity market models are assuming increasing relevance in the context of market and policy design. Traditionally, most of these models were developed in purely static and deterministic settings. However, there is a growing need to incorporate hierarchy and uncertainty into these models, complications that introduce significant challenges from both an analytical and a computational standpoint. Motivated by these concerns, we provide an outline of the main models in such settings and discuss the relevant analytical and computational questions. This tutorial has three parts, the first focusing on deterministic models, while the second and third parts consider stochastic and hierarchical generalizations. In each part, we develop the relevant models and discuss the analysis (existence, uniqueness of equilibria) and the computation of equilibria. This is joint work with Jong-Shi Pang and Dane Schiro, both at the University of Illinois at Urbana-Champaign.
Stochastic variational inequality problems: tractable sufficiency conditions and stochastic approximation schemes Vinayak (Uday) V. Shanbhag, Pennsylvania State University, USA
We consider the stochastic variational inequality problem in which the mappings contain expectations over a possibly general measure space and the associated sets may be unbounded. In this talk, we consider two sets of questions. First, we provide tractable, verifiable conditions for showing existence that do not necessitate integration. Such conditions are provided for quasi-variational inequalities and complementarity problems and can further accommodate multivalued maps. Second, we discuss a stochastic approximation scheme for monotone stochastic variational inequalities in which step-length choices are made in accordance with problem parameters (Lipschitz constant, monotonicity constant, etc.). It can be shown that such rules can be derived in networked regimes without a global coordinator. Since maps may not admit Lipschitzian properties (or such constants may be difficult to estimate), we consider a local randomization scheme which allows for deriving a Lipschitz constant for a "smoothed" map, and show how such techniques can be integrated within stochastic approximation. We present some preliminary numerical results obtained from applying such schemes to stochastic Nash-Cournot games.
The first part of this talk is joint work with Uma Ravat at UIUC while the second part is based on joint work with Angelia Nedich and Farzad Yousefian, both at UIUC.
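The projected stochastic approximation scheme discussed above can be sketched on a deliberately tiny one-dimensional instance: take F(x) = x - 1 observed with zero-mean noise, constraint set X = [0, 10], and diminishing steps theta/k. This is only an illustration of the iteration itself; the talk's point about choosing steps without knowing problem constants is not reproduced, and all constants below are invented.

```python
# Projected stochastic approximation (Robbins-Monro type) for a monotone
# stochastic VI: x_{k+1} = Proj_X( x_k - (theta/k) * sampled F(x_k) ).

import random

def project(x, lo=0.0, hi=10.0):
    return max(lo, min(hi, x))

def stochastic_approximation(x0, n_iters=20000, theta=1.0, seed=1):
    rng = random.Random(seed)
    x = x0
    for k in range(1, n_iters + 1):
        noisy_F = (x - 1.0) + rng.gauss(0.0, 0.1)  # unbiased sample of F(x)
        x = project(x - (theta / k) * noisy_F)
    return x

x = stochastic_approximation(x0=8.0)
print(round(x, 2))  # iterates converge to the solution x* = 1
```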
Uniform properties in certain complementarity problems and complementarity systems Jinglai Shen, University of Maryland Baltimore County, USA
In this talk, we discuss some uniform properties of structured complementarity problems and several classes of complementarity systems arising from error and uncertainty analysis in statistics and switching systems, respectively. The first uniform property pertains to B-spline estimation of a convex function in statistics. The optimal spline coefficients of this estimator give rise to a family of size-varying, Lipschitz piecewise linear functions. In order to show the uniform convergence of the estimator, we establish a uniform Lipschitz property in the infinity norm. Its implications in asymptotic analysis of the convex B-spline estimator are revealed. The second uniform property is relevant to robust non-Zenoness of switching Lipschitz piecewise affine systems (PASs) and the related linear complementarity systems. In particular, we show that there is a uniform bound on the number of mode switchings for a family of Lipschitz PASs as long as their subsystem matrices are uniformly bounded. Some applications and open problems are discussed.
Aspirational preferences and their representation by risk measures Melvyn Sim, National University of Singapore
We consider choice over uncertain, monetary payoffs and study a general class of preferences. These preferences favor diversification, except perhaps on a subset of sufficiently disliked acts, over which concentration is instead preferred. This structure encompasses a number of known models (e.g., expected utility and several variants under a concave utility function). We show that such preferences share a representation in terms of a family of measures of risk and targets. Specifically, the choice function is equivalent to selection of a maximum index level such that the risk of beating the target at that level is acceptable. This representation may help to uncover new models of choice. One that we explore in detail is the special case when the targets are bounded. This case corresponds to a type of satisficing and has descriptive relevance. Moreover, the model is amenable to large-scale optimization. This is joint work with David Brown and Enrico De Giorgi.
Approximating Lp-norm-constrained polynomial optimization problems via the algorithmic theory of convex bodies Anthony Man-Cho So, The Chinese University of Hong Kong, Hong Kong
In recent years, Lp-norm-constrained polynomial optimization has found applications in many different areas, including the spectral theory of tensors, signal processing, data analysis and quantum physics. Given their generality, Lp-norm-constrained polynomial optimization problems are typically intractable, which leads to the question of their approximability. In this talk, we will discuss the close connection between Lp-norm-constrained polynomial optimization and the algorithmic theory of convex bodies. Then, we will demonstrate how techniques from the latter can be used to prove the best approximation results known to date for the former.
On safe tractable approximations of chance constrained linear matrix inequalities with partly dependent perturbations Anthony Man-Cho So, The Chinese University of Hong Kong, Hong Kong
In recent years, there has been significant progress in using convex optimization techniques to deal with uncertainties in the data of an optimization problem. One of the successes is the development of so-called safe tractable approximations of chance constrained programs, where a probabilistic constraint is replaced by an efficiently computable convex constraint whose validity implies the validity of the former. Currently, such an approach mainly applies to problems where the data perturbations are independent. However, in some applications, the data perturbations are not independent, and so existing results cannot be applied. In this talk, we will demonstrate how tools from probability theory can be used to develop safe tractable approximations of chance constrained programs with dependent data perturbations. An advantage of our approach is that the resulting safe tractable approximations can be formulated as SDPs or even SOCPs, thus allowing them to be solved easily by off-the-shelf solvers. If time permits, we will also discuss some applications of our approach.
Estimating dynamic discrete-choice games of incomplete information Che-Lin Su, The University of Chicago Booth School of Business, USA
We investigate the estimation of models of dynamic discrete-choice games of incomplete information, formulating the maximum-likelihood estimation exercise as a constrained optimization problem which can be solved using state-of-the-art constrained optimization solvers. Under the assumption that only one equilibrium is played in the data, our approach avoids repeatedly solving the dynamic game or finding all equilibria for each candidate vector of the structural parameters. We conduct Monte Carlo experiments to investigate the numerical performance and finite-sample properties of the constrained optimization approach for computing the maximum-likelihood estimator, the two-step pseudo maximum-likelihood estimator and the nested pseudo-likelihood estimator, implemented by both the nested pseudo-likelihood algorithm and a modified nested pseudo-likelihood algorithm.
This is joint work with Michael Egesdal and Zhenyu Lai at Harvard University.
The Moreau-Yosida regularization of the Ky Fan k-norm related functions Defeng Sun, National University of Singapore
Matrix optimization problems (MOPs) involving the Ky Fan $k$-norm arise frequently in many applications. In order to design algorithms to solve large-scale MOPs with the Ky Fan $k$-norm, we need to understand the first and second order properties of the Moreau-Yosida regularization of the Ky Fan $k$-norm function and the indicator function of the Ky Fan $k$-norm epigraph. As an initial step, we first study the counterparts of the vector $k$-norm related functions, including the metric projectors over the dual vector $k$-norm ball and the vector $k$-norm epigraph, and the directional derivatives and Fréchet differentiability of these metric projectors. Then we use these results to study the corresponding properties of the Moreau-Yosida regularization of the Ky Fan $k$-norm function and the indicator function of the Ky Fan $k$-norm epigraph.
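For readers new to the Moreau-Yosida regularization, the scalar case makes the definitions concrete: for f(y) = |y| (the vector 1-norm in dimension one, a trivial instance of the k-norm family), the proximal map is the soft-threshold and the Moreau envelope is the Huber function. This sketch illustrates only the definitions, not the talk's vector/matrix machinery.

```python
# One-dimensional illustration of the Moreau-Yosida regularization of |.|:
#   e_lam(x) = min_y |y| + (1/(2*lam)) * (x - y)^2,
# whose minimizer is the proximal map prox_{lam|.|}(x) (soft-threshold).

def prox_abs(x, lam):
    # Soft-threshold: shrink x toward 0 by lam, clipping at 0.
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def moreau_envelope_abs(x, lam):
    p = prox_abs(x, lam)
    return abs(p) + (x - p) ** 2 / (2.0 * lam)

# The envelope is a smooth (Huber) function: x^2/(2*lam) for |x| <= lam,
# and |x| - lam/2 beyond that.
print(moreau_envelope_abs(0.5, 1.0), moreau_envelope_abs(3.0, 1.0))
```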
A rank-corrected procedure for matrix completion with fixed basis coefficients Defeng Sun, National University of Singapore
In this talk, we address low-rank matrix completion problems with fixed basis coefficients, which include low-rank correlation matrix completion in various fields such as the financial market and low-rank density matrix completion from quantum state tomography. For this class of problems, the efficiency of the common nuclear norm penalized estimator for recovery may be challenged. Here, with a reasonable initial estimator, we propose a rank-corrected procedure to generate an estimator of high accuracy and low rank. For this new estimator, we establish a non-asymptotic recovery error bound and analyze the impact of adding the rank-correction term on improving the recoverability. We also provide necessary and sufficient conditions for rank consistency in the sense of Bach (2008), in which the concept of constraint nondegeneracy in matrix optimization plays an important role. As a by-product, our results provide a theoretical foundation for the majorized penalty method of Gao and Sun (2010) and Gao (2010) for structured low-rank matrix optimization problems. [This is joint work with Weimin Miao and Shaohua Pan.]
An inexact accelerated proximal gradient method for large-scale linearly constrained convex SDP Kim-Chuan Toh, National University of Singapore
The accelerated proximal gradient (APG) method, first proposed by Nesterov for minimizing smooth convex functions, later extended by Beck and Teboulle to composite convex objective functions, and studied in a unifying manner by Tseng, has proven to be highly efficient in solving some classes of large-scale structured convex optimization (possibly nonsmooth) problems, including nuclear norm minimization problems in matrix completion and $l_1$ minimization problems in compressed sensing. The method has superior worst-case iteration complexity over the classical projected gradient method, and usually has good practical performance on problems with appropriate structures. In this work, we extend the APG method to the inexact setting, where the subproblem in each iteration is only solved approximately, and show that it enjoys the same worst-case iteration complexity as the exact counterpart if the subproblems are progressively solved to sufficient accuracy. We apply our inexact APG method to solve large-scale convex quadratic semidefinite programming (QSDP) problems. The subproblem in each iteration is solved by a semismooth Newton-CG (SSNCG) method with warm-start using the iterate from the previous iteration. Our APG-SSNCG method is demonstrated to be efficient for QSDP problems whose positive semidefinite linear maps are highly ill-conditioned or rank deficient.
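The exact-prox baseline that this abstract generalizes can be sketched in a few lines: the FISTA form of APG for min_x 0.5*||Ax - b||^2 + mu*||x||_1, the kind of composite problem the framework covers. The data, sizes and step length below are illustrative, and the talk's actual contribution (inexact subproblem solves with preserved complexity) is not reproduced here.

```python
# Minimal exact-prox accelerated proximal gradient (FISTA) for
#   min_x 0.5 * ||A x - b||^2 + mu * ||x||_1.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def grad(A, b, x):
    # Gradient of the smooth part: A^T (A x - b).
    r = [ri - bi for ri, bi in zip(matvec(A, x), b)]
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]

def soft(v, t):
    # Proximal map of t * ||.||_1 (componentwise soft-threshold).
    return [max(abs(vi) - t, 0.0) * (1 if vi > 0 else -1) for vi in v]

def fista(A, b, mu, L, iters=500):
    x = [0.0] * len(A[0])
    y, t = x[:], 1.0
    for _ in range(iters):
        g = grad(A, b, y)
        x_new = soft([yi - gi / L for yi, gi in zip(y, g)], mu / L)
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0  # momentum update
        y = [xn + ((t - 1.0) / t_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

A = [[1.0, 0.0], [0.0, 1.0]]
b = [2.0, 0.5]
# For identity A (Lipschitz constant L = 1) the minimizer is the
# soft-threshold of b: [1.0, 0.0] at mu = 1.
x = fista(A, b, mu=1.0, L=1.0)
print([round(v, 4) for v in x])
```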
Optimization of deep drawing processes Stefan Ulbrich, Technische Universität Darmstadt, Germany
Hydroforming of sheet metals involves elastoplasticity and frictional contact, which can be modeled by an evolutionary quasi-variational inequality (EVI). The aim is to control the blank holder force and the fluid pressure in such a way that no damage occurs during the forming process and that the final shape of the sheet metal product coincides with the desired geometry. The resulting optimization problem is a challenging infinite-dimensional MPEC, which leads after discretization to very large-scale problems. We consider the modeling of the optimization problem and a semismooth FE discretization of the EVI, and discuss model reduction techniques to reduce the complexity of the discretized EVI. Moreover, we show optimization results for a 3D hydroforming process based on the full FE discretization, evaluate the accuracy of the reduced-order model and discuss a strategy towards an efficient optimization approach based on the reduced-order model.
A globalized semismooth Newton method for optimization problems involving nonsmooth functions Michael Ulbrich, Technische Universität München, Germany
We propose a class of globalized semismooth Newton methods for solving optimization problems involving smooth nonconvex and nonsmooth convex terms in the cost function. Our focus is on optimization problems, but the principal idea is also applicable to hemivariational inequalities and to monotone inclusion problems. A prox-type fixed point equation representing the stationarity conditions forms the basis of the approach. Recently, first-order methods derived from this, like variants of the proximal gradient method, have been intensively investigated in a variety of contexts, e.g., for (tree / group) sparsity and low-rank regularizations. In many important situations, the prox operator can be shown to be semismooth with tractably computable generalized derivative-vector products. The method we investigate uses semismooth Newton steps for the fixed point equation to enhance an underlying basic globally convergent method, such as a proximal gradient scheme. The acceptance of semismooth Newton steps is controlled by a filter. We show that the resulting method is globally convergent and achieves a locally q-superlinear rate under suitable conditions. It can be implemented in a matrix-free fashion and allows for regularized, inexact solutions of the Newton system by Krylov methods. The talk concludes with numerical results.
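A bare-bones local version of the core idea can be sketched for the lasso: apply Newton steps to the semismooth prox-gradient fixed-point residual, using the 0/1 diagonal generalized derivative of soft-thresholding. The talk's method additionally globalizes such steps with a filter and a first-order fallback, which this sketch omits; all names and the warm-start strategy below are illustrative assumptions.

```python
import numpy as np

def ssn_lasso(A, b, lam, x0, iters=30):
    """Local semismooth Newton method on the fixed-point residual
        F(x) = x - prox_{g/L}(x - grad f(x)/L),
    with f(x) = 0.5*||A x - b||^2 and g = lam*||.||_1, so the prox is
    soft-thresholding and its Clarke Jacobian is a 0/1 diagonal matrix."""
    L = np.linalg.norm(A, 2) ** 2
    H = A.T @ A
    x = x0.copy()
    I = np.eye(len(x))
    for _ in range(iters):
        z = x - (H @ x - A.T @ b) / L
        px = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        F = x - px
        if np.linalg.norm(F) < 1e-12:
            break
        D = (np.abs(z) > lam / L).astype(float)   # generalized derivative of the prox at z
        J = I - D[:, None] * (I - H / L)          # an element of the Clarke Jacobian of F
        x = x + np.linalg.solve(J, -F)            # semismooth Newton step
    return x

# usage: warm start with proximal gradient steps (the "basic globally
# convergent method"), then switch to semismooth Newton
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:2] = [1.0, -2.0]
b = A @ x_true
lam, L = 0.1, np.linalg.norm(A, 2) ** 2
x = np.zeros(10)
for _ in range(100):
    z = x - (A.T @ (A @ x - b)) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
x_hat = ssn_lasso(A, b, lam, x)
```

Since the residual is piecewise linear here, a Newton step from within the piece containing the solution solves the fixed-point equation exactly, which is the source of the fast local rate.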
On the polynomial complexity of exact recovery Stephen Vavasis, University of Waterloo, Canada
In recent years it has been shown that convex programming is able to solve discrete NP-hard data mining problems exactly in polynomial time in the case when the input instance is generated according to a certain specified model. Notable results in this direction include compressive sensing, matrix rank minimization, matrix completion, and rank-sparsity decomposition. However, underlying most of these results is the assumption that the exact solution to the convex program is available, while in many cases it is possible to obtain only an approximate solution. We consider a new termination test for a convex relaxation of a problem related to nonnegative matrix factorization. This termination test provides an a posteriori proof that the original discrete problem is solved. Moreover, it also leads to an a priori analysis showing that the exact solution to the original problem is obtainable from an approximate solution to the convex program in the case that the original instance is generated according to the specified model.
Parts of this talk represent joint work with Kim-Chuan Toh and Xuan Vinh Doan.
A perturbed sums of squares theorem for polynomial optimization and its applications Hayato Waki, Kyushu University, Japan
We prove a property of positive polynomials on a compact set with a small perturbation. When applied to a Polynomial Optimization Problem (POP), the property implies that the optimal value of the corresponding Semidefinite Programming (SDP) relaxation with sufficiently large relaxation order is bounded from below by $f^*-\epsilon$ and from above by $f^*+\epsilon(n+1)$, where $f^*$ is the optimal value of the POP. We propose new SDP relaxations for POP based on modifications of existing sums-of-squares representation theorems. An advantage of our SDP relaxations is that they are of considerably smaller dimension than those originally proposed by Lasserre. We present some applications and the results of our computational experiments. This is joint work with Masakazu Muramatsu and Levent Tuncel.
Stochastic equilibria Roger Wets, University of California, USA
These lectures will be devoted to algorithmic procedures that show promise for actually computing (economic) equilibria when the environment is stochastic, reporting, in the process, on experimental implementations. They will include, to some extent, a review of various proposed schemes to deal with both deterministic and stochastic equilibrium problems.
About the stability of Walras equilibrium Roger Wets, University of California, USA
Soon after nailing down the existence of equilibrium prices (Arrow-Debreu and McKenzie), their stability under various perturbations took center stage. Stability might have validated the so-called tâtonnement process (Adam Smith's invisible hand). However, instability with respect to shifts in economic resources, in particular Scarf's examples, brought the perception that equilibrium might actually be too fragile as a concept. This was responsible for a fundamental methodological shift, initiated by Debreu in the 1970s and now widely adopted by the microeconomics community, toward differential topological tools used to derive (generic) existence and to study the properties of equilibrium prices. But even such an approach could at best obtain (local) stability results under model adjustments that are far less than satisfactory from an economic viewpoint.
New technology coming from Variational Analysis can be brought to bear on these issues, abandoning in the process the undesirable model adjustments. Rather than equations, variational inequalities can serve as the basis of the equilibrium characterization, and for these, generalizations of the implicit function theorem are now available. The stability results derived by these means suggest the need for a shift in attitude toward the targets of stability analysis, as well as in the study of the evolution of an equilibrium in ongoing markets.
Distributionally robust convex optimization Wolfram Wiesemann, Imperial College London, UK
Distributionally robust optimization is a decision-making paradigm that caters for situations in which the probability distribution of the uncertain problem parameters is itself subject to uncertainty. The distribution is then typically assumed to belong to an ambiguity set comprising all distributions that share certain structural or statistical properties. In this talk, we present a unifying framework for modeling and solving distributionally robust optimization problems. We introduce standardized ambiguity sets that contain all distributions with prescribed conic-representable confidence sets and with mean values residing on an affine manifold. It turns out that these ambiguity sets are sufficiently expressive to encompass and extend several approaches from the recent literature. They also allow us to accommodate a variety of distributional properties from classical and robust statistics that have not yet been studied in the context of robust optimization. We determine sharp conditions under which distributionally robust optimization problems based on our standardized ambiguity sets are computationally tractable. We also provide tractable conservative approximations for problems that violate these conditions.
This is joint work with Daniel Kuhn and Melvyn Sim.
Optimization in machine learning Stephen Wright, University of Wisconsin-Madison, USA
Machine learning provides essential tools for analysis and extraction of meaning from data. As the data deluge gathers pace, the challenges of performing effective and timely analysis become more acute. Researchers in machine learning have turned more and more to optimization as a means for formulating and solving their analysis problems. The demands of the applications are pushing optimization research in new directions, with many interesting contributions coming from machine learning researchers. This talk surveys several aspects of machine learning in which optimization techniques are playing key roles, describes the techniques that have proven useful, and outlines some challenges that remain. We discuss central applications such as speech and language processing; important paradigms such as hidden Markov models, deep learning, and kernel learning; and key optimization techniques such as stochastic gradient, regularization, coordinate descent, and approximate second-order methods.
Outlier pursuit: robust PCA and collaborative filtering Huan Xu, National University of Singapore
Principal Component Analysis is one of the most widely used techniques for dimensionality reduction. Nevertheless, it is plagued by sensitivity to outliers, so finding robust analogs is critical. In the standard form, PCA involves organizing data into a matrix where columns represent the points and rows the features. As such, outliers can be modeled as some columns that are completely arbitrary. We propose Outlier Pursuit, a recovery method based on convex optimization, and provide analytical guarantees on when it is able to both (a) recover the low-rank matrix, and (b) identify the outliers. Similar results can be obtained in the more challenging case where, on top of having outliers, most entries of the matrix are missing. This can be used in the task of collaborative filtering where some "users" are malicious.
Protection at all levels: probabilistic envelope constraints Huan Xu, National University of Singapore
Optimization under chance constraints is a standard approach to ensure that bad events, such as portfolio losses, are unlikely to happen. Chance constraints do nothing, however, to protect against more extreme events (e.g., deep portfolio losses, or bankruptcy). In this talk, we will propose a new decision concept, termed the "probabilistic envelope constraint", which extends the notion of chance constraints to a formulation that provides different probabilistic guarantees at each level of constraint violation. Thus, we develop a notion of guarantee across the spectrum of disasters, or rare events, ensuring that these levels of protection hold across the curve. We further show that the corresponding optimization problem can be reformulated as a semi-infinite optimization problem, and provide conditions that guarantee its tractability. Interestingly, the resulting formulation is what is known as comprehensive robust optimization in the literature. This work thus provides a new fundamental link between two main methodologies in optimization under uncertainty: stochastic optimization and robust optimization. This is joint work with Constantine Caramanis (UT Austin) and Shie Mannor (Technion).
Asymptotic convergence analysis of distributionally robust optimization and equilibrium problems Huifu Xu, University of Southampton, UK
We study a distributionally robust optimization approach for a one-stage stochastic minimization problem where the true distribution of the underlying random variables is unknown but is known to lie in a set of probability distributions, and the optimal decision is taken on the basis of the worst possible distribution from that set. We consider the case when the distributional set is constructed through samples and investigate the asymptotic convergence of optimal values and optimal solutions as the sample size increases. The analysis provides a unified framework for the asymptotic convergence of data-driven problems in the robust optimization literature and extends the classical asymptotic convergence analysis of stochastic programming. The discussion is extended to a stochastic Nash equilibrium problem where each player takes a robust action on the basis of their subjective expected objective value.
(Joint work with Hailin Sun)
Warm-starting the homogeneous and self-dual interior-point method for linear and conic quadratic problems Yinyu Ye, Stanford University, USA
We present two strategies for warm-starting primal-dual interior-point methods for the homogeneous self-dual model when applied to mixed linear and quadratic conic optimization problems. Common to both strategies is their use of only the final (optimal) iterate of the initial problem and their negligible computational cost. This is a major advantage when compared to previously suggested strategies that require a pool of iterates from the solution process of the initial problem. Consequently, our strategies are better suited for users who use optimization algorithms as black-box routines, which usually only output the final solution. Our two strategies differ in that one assumes knowledge only of the final primal solution while the other assumes the availability of both primal and dual solutions. We analyze the strategies and deduce conditions under which they result in improved theoretical worst-case complexity. We present extensive computational results showing work reductions when warm-starting compared to cold-starting in the range 30%-75%, depending on the problem class and the magnitude of the problem perturbation. The computational experiments thus substantiate that the warm-starting strategies are useful in practice.
Joint work with Anders Skajaa and Erling Andersen
On optimization over nonsymmetric cones Akiko Yoshise, University of Tsukuba, Japan
In this talk, we present some recent results on optimization over nonsymmetric cones. Subjects include: numerical results on the doubly nonnegative relaxation for the quadratic assignment problem, primal barrier function algorithms for solving doubly nonnegative optimization problems, and a discretization method for testing copositivity of a given matrix, with numerical experiments on the maximum clique problem. These are joint works with Yasuaki Matsukawa and Akihiro Tanaka.
On the convergence rate of a generalized proximal point algorithm Xiaoming Yuan, Hong Kong Baptist University, Hong Kong
We propose a generalized scheme of the well-known proximal point algorithm (PPA) for finding a root of a maximal monotone operator, by which a number of benchmark operator splitting methods in the PDE and optimization areas can be recovered. We discuss convergence behaviors (e.g., worst-case iteration complexity, sublinear or linear convergence rate) of this generalized PPA scheme under different conditions. This is a joint work with Etienne Corman.
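As an illustrative sketch (not the general scheme of the talk, which covers arbitrary maximal monotone operators and recovers various splitting methods), a relaxed resolvent iteration can be written in a few lines. Here T is taken to be the gradient of a convex quadratic, so its resolvent is a linear solve and the root of T solves Qx = b; all names and parameter values are assumptions for the demo.

```python
import numpy as np

def generalized_ppa(resolvent, x0, rho=1.5, iters=200):
    """Generalized proximal point iteration
        x_{k+1} = (1 - rho) * x_k + rho * J(x_k),
    where J = (I + lam*T)^{-1} is the resolvent of a maximal monotone
    operator T and rho in (0, 2) is a relaxation factor; rho = 1 gives
    the classical PPA, other choices give over/under-relaxed variants."""
    x = x0
    for _ in range(iters):
        x = (1.0 - rho) * x + rho * resolvent(x)
    return x

# usage: T = grad f for f(x) = 0.5 x'Qx - b'x, so a root of T solves Qx = b
rng = np.random.default_rng(3)
B = rng.standard_normal((8, 8))
Q = B @ B.T + np.eye(8)                    # symmetric positive definite
b = rng.standard_normal(8)
lam = 1.0
Jmat = np.linalg.inv(np.eye(8) + lam * Q)  # resolvent of this T is linear
x = generalized_ppa(lambda v: Jmat @ (v + lam * b), np.zeros(8), rho=1.5)
```

Since the resolvent of a maximal monotone operator is firmly nonexpansive, the relaxed iteration converges to a root for any rho in (0, 2), which is the degree of freedom the generalized scheme exploits.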
A scalable strategy for time-dependent parametric nonlinear programs Victor Zavala, Argonne National Laboratory, USA
We present an approach for nonlinear programming (NLP) based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques. As opposed to existing algorithmic approaches to NLP, the approach provides all the features required for scalable implementations in time-dependent environments: it can efficiently detect and exploit directions of negative curvature, it is superlinearly convergent, and it enables the early termination of the Newton step computation through iterative linear algebra. In addition, the approach enables fast detection of activity, efficient warm-starting, and progress on a primal-dual merit function at every iteration. The approach has applications in time-stepping approaches for differential variational inequalities and model predictive control. We present general convergence results and demonstrate its behavior through numerical studies.
On low-rank relaxation and polynomial optimization Shuzhong Zhang, University of Minnesota, USA
One of the fundamental models in tensor/polynomial optimization is to maximize a tensor/polynomial function over the (Cartesian product of) unit Euclidean sphere(s). Applications of such models can be widely found in tensor eigenvalue computation, tensor PCA (Principal Component Analysis), Magnetic Resonance Imaging (MRI), and quantum entanglement, among many others. By classical methods, the global optimal solutions of such models can only be found for small-scale instances within reasonable computational time. While approximation methods have been developed recently, the solutions produced by the approximation algorithms are not guaranteed to be globally optimal. In this talk we shall propose a new approach to solve this important class of optimization models. By regarding polynomial/tensor optimization as linear optimization with an additional rank-one constraint, it is possible to relax the last constraint by working with a nuclear norm penalty, yielding a convex optimization relaxation, as is commonly done in compressive sensing. Attention is then paid to solving the resulting convex model, which may be very large in size, by first-order methods. Numerical experiments show that this approach produces a rank-one solution with very high probability regardless of the size of the model, in which case an optimal solution to the original problem is found very quickly indeed.
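The matrix (degree-2) instance of this relaxation idea is easy to verify numerically: maximizing the linear function <A, X> over the nuclear-norm ball yields the rank-one matrix built from the leading singular pair, so the relaxation is tight and recovers the spherical maximizers of the bilinear form x'Ay. (The tensor case treated in the talk needs genuinely more machinery; this sketch only illustrates the principle, and the function name is an assumption.)

```python
import numpy as np

def nuclear_ball_argmax(A):
    """Maximize <A, X> subject to ||X||_* <= 1.
    The maximizer is u1 v1^T (leading singular pair) with value sigma_1(A):
    a rank-one matrix, so the rank-one relaxation is tight in this case."""
    U, s, Vt = np.linalg.svd(A)
    return np.outer(U[:, 0], Vt[0, :]), s[0]

# usage
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 9))
X_star, val = nuclear_ball_argmax(A)
```

The optimal value equals max over unit vectors x, y of x'Ay, i.e., the spherically constrained bilinear maximization that the relaxation replaces.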
Robust portfolio selection via learning with mixture model Shushang Zhu, Sun Yat-sen University, China
In this work, we integrate the robust optimization approach and mixture distribution modeling into portfolio selection. With a mixture distribution, we can combine information from different channels, such as historical data and the investor's subjective views, to predict asset returns. A Bayesian learning approach is adopted to specify the confidence region of the estimated parameters of the distribution of asset returns, based on which the robust portfolio selection problem is formulated. Using duality theory, we show that the robust portfolio selection problem can be further reformulated as a linear program or a second-order cone program, which can be easily solved. We also report the performance of the proposed approach via simulation and an empirical study.
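The second-order-cone mechanism behind such reformulations can be sketched generically: for an ellipsoidal confidence region around the estimated mean, the worst-case expected return has a closed form containing a sqrt(x'Σx) term, which is exactly what turns the robust problem into an SOCP. This is a textbook-style sketch under an assumed ellipsoidal region; the talk's confidence region is instead learned from a mixture model, and all names here are illustrative.

```python
import numpy as np

def worst_case_return(x, mu_hat, Sigma, kappa):
    """Worst-case expected return of portfolio x over the ellipsoid
    {mu : (mu - mu_hat)' Sigma^{-1} (mu - mu_hat) <= kappa^2}:
        min_mu  mu' x  =  mu_hat' x - kappa * sqrt(x' Sigma x)."""
    return mu_hat @ x - kappa * np.sqrt(x @ Sigma @ x)

# usage: the minimizing mean lies on the ellipsoid boundary, opposite Sigma x
rng = np.random.default_rng(5)
n = 4
B = rng.standard_normal((n, n))
Sigma = B @ B.T + 0.1 * np.eye(n)     # positive definite covariance estimate
mu_hat = rng.standard_normal(n)
x = rng.random(n)
x /= x.sum()                          # long-only portfolio weights
kappa = 1.5
wc = worst_case_return(x, mu_hat, Sigma, kappa)
mu_star = mu_hat - kappa * (Sigma @ x) / np.sqrt(x @ Sigma @ x)
```

Maximizing this worst-case return over x then involves a second-order cone constraint on the term kappa*sqrt(x'Σx), which is why such robust problems reduce to SOCPs.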