Dissertations and Theses - Industrial and Enterprise Systems Engineering
http://hdl.handle.net/2142/16359
Game theoretic formulations and solution methods for microeconomic market models
http://hdl.handle.net/2142/46757
Game theoretic formulations and solution methods for microeconomic market models
Schiro, Dane
Microeconomic equilibrium problems are intimately related to game theory, but the current state of knowledge for several types of microeconomic problems is limited. This dissertation rigorously addresses several of these problems from the standpoint of variational inequality theory. In addition to proving theoretical results about problem properties, important microeconomic implications are discussed when applicable. The individual chapters of this work focus on:
• equilibrium existence for perfectly competitive capacity expansion problems with risk-averse players;
• a comprehensive analysis of the application of Lemke’s method to affine generalized Nash
equilibrium problems;
• the formulation and study of a unified power market model encompassing different microeconomic behavioral assumptions, capacity markets, emission permit auctions, and consumer surplus maximization;
• an investigation of differential Nash games with mixed state-control constraints.
Although the presented results are applicable to a broad class of games that satisfy certain structural properties, electricity markets represent a key application area underlying
several of the chapters and are specifically addressed in Chapter 4. As a whole, this dissertation should clearly illustrate how variational inequality theory can be used to analyze these applications and should provide the basis for the development of effective solution methodologies. It should also provide deeper insights into complex microeconomic games that cannot be described solely with demand and supply curves. Suggestions for additional research are provided at the end of each chapter to motivate future work in the respective domains.
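The second bullet above concerns Lemke's method for affine generalized Nash equilibrium problems. As a hedged illustration of the underlying object only, the sketch below sets up a tiny linear complementarity problem (LCP) with made-up data M and q and solves it with a simple projected fixed-point iteration, not Lemke's pivoting scheme, which is what the dissertation actually analyzes:

```python
import numpy as np

# Illustrative LCP: find z >= 0 with M z + q >= 0 and z . (M z + q) = 0.
# M and q are invented data; the projected fixed-point iteration below is
# valid when M is positive definite and the step tau is small enough.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([-1.0, -0.5])

z = np.zeros(2)
tau = 0.3  # step size chosen so the iteration map is a contraction
for _ in range(500):
    z = np.maximum(0.0, z - tau * (M @ z + q))

w = M @ z + q
print(float(z @ w))  # complementarity residual, near 0 at the solution
```

Lemke's method would instead pivot through basic solutions of the same system; the fixed-point view is shown only because it fits in a few lines.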
Game theory
variational inequalities
microeconomics
power markets
optimal control
Thu, 16 Jan 2014 18:01:25 GMT
Multi-objective control and coordination of multi-agent systems
http://hdl.handle.net/2142/46676
Multi-objective control and coordination of multi-agent systems
Valicka, Christopher
In this dissertation, control methodologies for systems of multiple mobile
agents facing multiple objectives are considered. Carefully constructed objective
functions for agents interested in any or all of collision avoidance,
robust communications, waypoint following, and dynamic coverage objectives
are presented. Theory relevant to gradient-like control laws designed
using over-approximations of the maximum function is studied and applications
of developed controllers in simulations and physical experiments are
presented.
Agent behavior resulting from and relevant to the various objectives is
discussed. Amongst the various objective functions, novel formulations for
proximity and global coverage objectives are presented. A novel control law
design is developed based on copula structures from multiattribute decision
theory. Systematic and intuitive methods for determining the tradeoffs between
multiple objectives are presented and applied to the design of multi-agent
control laws.
Using over-approximations of the maximum function, appropriate definitions
are presented to develop the conditions required for an agent to accomplish
multiple objectives. The proposed control laws' suitability for and
ability to be implemented straightforwardly on various multi-agent systems
facing multiple objectives are demonstrated in multiple simulation and experimental
results. The proposed copula based control law and global coverage
approach are discussed and validated through their combined implementation
in a multi-robot testbed.
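One standard over-approximation of the maximum function, shown here as a generic illustration rather than the dissertation's exact construction, is the scaled log-sum-exp, which is always at least the true maximum and within log(n)/p of it:

```python
import math

def logsumexp_max(values, p=10.0):
    """Smooth over-approximation of max: (1/p) * log(sum(exp(p * v))).
    Satisfies max(values) <= result <= max(values) + log(len(values)) / p."""
    m = max(values)  # shift by the max for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

vals = [0.2, 1.0, 0.9]
approx = logsumexp_max(vals, p=10.0)
print(approx >= max(vals))                        # True
print(approx - max(vals) <= math.log(3) / 10.0)   # True
```

Because the approximation is differentiable, gradient-like control laws can be defined through it where the raw maximum of several objective functions would be nonsmooth.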
Multiattribute utility copula
multi-objective control
multi-agent system
multi-robot testbed
dynamic coverage control
Thu, 16 Jan 2014 17:58:45 GMT
Performance guarantees for deadline-driven MapReduce jobs under failure
http://hdl.handle.net/2142/45663
Performance guarantees for deadline-driven MapReduce jobs under failure
Faghri, Faraz
Increasingly, large systems and data centers are being built in a 'scale out' manner, i.e., using large numbers of commodity hardware components, instead of the traditional 'scale up' approach using expensive, specialized equipment. However, large numbers of commodity components imply higher failure rates across such systems. Such failures can cause applications to miss their deadlines for task completion. For this reason, cloud service providers and cloud applications must anticipate failures and engineer their services accordingly.
In this thesis, we first analyze the availability of a commodity data center designed for MapReduce applications. MapReduce is increasingly used in industry for efficient large-scale data processing tasks, including personal advertising, spam detection, and data mining. We show how MapReduce software-level fault tolerance can be used to achieve the same availability as scale-up data centers. Second, we extend existing job schedulers for deadline-driven jobs to handle machine and software failures and satisfy service-level objectives.
Performance of systems
Service Level Objectives
Fault tolerance
Cloud computing
Hadoop
MapReduce
Thu, 22 Aug 2013 16:57:06 GMT
Decision analysis methods to handle uncertainties that impact product end-of-life recovery
http://hdl.handle.net/2142/45562
Decision analysis methods to handle uncertainties that impact product end-of-life recovery
Behdad, Sara
Environmental protection legislation, consumer interest in “green” products, a trend towards corporate responsibility and recognition of the potential profitability of salvaging operations have resulted in increased interest in product take-back. However, the cost-effectiveness of product take-back operations is hampered by many factors, including the high cost of disassembly, a widely varying feedstock of dissimilar products, and uncertainty in the quantity and variability in the quality of received products. These uncertainties make current product take-back systems unprofitable. Two types of decisions must be made: how to carry out the disassembly process in the most efficient manner to “mine” the value-added that is still embedded in the product, and then how best to utilize that value-added once it is recovered.
The variation in the quality of End-of-Life (EOL) products that remanufacturers receive may be reduced by offering financial incentives to those who return products with a specified quality level. Although the variability in the quality of received products can be reduced by applying such techniques, firms seeking to rely on recovered products as a key ingredient of manufacturing still face uncertainty surrounding the number of returned products [1] as well as uncertainties in disassembly operations.
The aim of this research is to help product recovery facilities handle uncertainties that mainly affect the profitability of the EOL product recovery operations. EOL product recovery includes several activities: collecting EOL products; determining the potential for the product’s reuse/upgrade, disassembling the product, and recovering the valuable components. Uncertainties exist in all of those activities in different forms and types. Several new, normative engineering decision systems including some mathematical models for decision making under uncertainty have been created in this dissertation to decrease those uncertainties and to reduce the detrimental effect of unavoidable uncertainty. The methods developed in this research can make product take-back systems more profitable and facilitate the product recovery activities including disassembly planning and EOL decision making.
The work has been expanded to focus on the design stage rather than the EOL stage and to consider how designers' decisions at the early stages of design influence the quantity, quality, and timing of EOL products arriving in the waste stream. Design for EOL management requires considering uncertainties in the design parameters as well as the uncertainties involved in EOL recovery activities, such as disassembly time and the probability of damage during disassembly. Moreover, it has been shown how EOL activities, including disassembly sequence planning, can be investigated at the conceptual design stage by applying Immersive Computing Technology (ICT). The ICT capabilities have been used to gather the information required for mathematical models that account for potential product damage during disassembly. In addition, ICT has been employed to show how the optimal disassembly sequence can sometimes be counterintuitive even to those with experience and expertise in disassembly procedures.
Applying ICT to disassembly sequence planning has provided a basis for a framework that achieves synergy between normative and descriptive approaches to design theory and methodology, with the goal of exploiting the strengths and remedying the weaknesses of each approach. By focusing on the product design stage and using virtual reality techniques, we can support designers with information that helps them choose product designs better suited for disassembly and end-of-life recovery.
Product End-of-Life Recovery
Disassembly Sequence Planning
Remanufacturing
Design for End-of-Life Recovery
Sustainable Design and Manufacturing
Thu, 22 Aug 2013 16:47:54 GMT
Stochastic approximation schemes for stochastic optimization and variational problems: adaptive steplengths, smoothing, and regularization
http://hdl.handle.net/2142/45385
Stochastic approximation schemes for stochastic optimization and variational problems: adaptive steplengths, smoothing, and regularization
Yousefian, Seyed Farzad
Stochastic approximation (SA) methods, first proposed by Robbins and Monro in 1951 for root-finding problems,
have been widely used in the literature to solve problems arising from stochastic convex optimization,
stochastic Nash games and more recently stochastic variational inequalities. Several challenges arise in the
development of SA schemes. First, little guidance is provided on the choice of the steplength sequence.
Second, most variants of these schemes in optimization require differentiability of the objective function and
Lipschitz continuity of the gradient. Finally, strong convexity of the objective function is often required,
an assumption that many problems fail to satisfy. Motivated by these challenges, this thesis studies
SA methods in three areas: (i) steplengths, (ii) smoothing, and
(iii) regularization.
The first part of this thesis pertains to solving strongly convex differentiable stochastic optimization
problems using SA methods. The performance of standard SA implementations can vary significantly based
on the choice of the steplength sequence, and in general, little guidance is provided about good choices.
Motivated by this gap, we present two adaptive steplength schemes equipped with convergence theory that
aim to overcome some of the reliance on user-specific parameters. Of these, the first scheme, referred to
as a recursive steplength stochastic approximation (RSA) scheme, minimizes the error bounds to derive a
rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous
iteration and certain problem parameters. The second scheme, termed the cascading steplength stochastic
approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function
with the reduction in the steplength occurring when a suitable error threshold is met. We then allow for
nondifferentiable objectives but with bounded subgradients over a certain domain. In such a regime, we
propose a local smoothing technique, based on random local perturbations of the objective function that
leads to a differentiable approximation of the function and a Lipschitzian property for the gradient of
the approximation. This facilitates the development of an adaptive steplength stochastic approximation
framework, which now requires sampling in the product space of the original measure and the artificially
introduced distribution. Motivated by problems arising in decentralized control and non-cooperative Nash games, in
the second part of this thesis, we consider a class of strongly monotone Cartesian variational inequality
problems, where the mappings either contain expectations or their evaluations are corrupted by error. Such
complications are captured under the umbrella of Cartesian stochastic variational inequality (CSVI) problems
and we consider solving such problems via SA schemes. Specifically, along similar directions to the RSA
scheme, a stepsize rule is constructed for strongly monotone stochastic variational inequality problems. The
proposed scheme is seen to produce sequences that are guaranteed to converge almost surely to the unique
solution of the problem. To cope with networked multi-agent generalizations, we provide requirements under
which independently chosen steplength rules still possess desirable almost-sure convergence properties. To
address non-smoothness, we consider a regime where Lipschitz constants on the map are either unavailable or
difficult to derive. Here, we generalize the aforementioned smoothing scheme for deriving an approximation
of the original mapping, which is then shown to be Lipschitz continuous with a prescribed constant. Using
this technique, we introduce a locally randomized SA algorithm and provide almost sure convergence theory
for the resulting sequence of iterates to an approximate solution of the original CSVI problem.
In the third part of this thesis, we consider a stochastic variational inequality (SVI) problem with a
continuous and monotone mapping over a compact and convex set. Traditionally, stochastic approximation
schemes for SVIs have relied on strong monotonicity and Lipschitzian properties of the underlying map. We
present a regularized smoothed SA (RSSA) scheme wherein stepsize, smoothing, and regularization parameters
are updated after every iteration. Under suitable assumptions on the sequences, we show that the
algorithm generates iterates that converge to a solution of the SVI problem in an almost-sure sense. Additionally,
we provide rate estimates that relate iterates to their counterparts derived from the Tikhonov trajectory
associated with a deterministic problem.
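The flavor of the RSA idea can be sketched on a one-dimensional strongly convex quadratic with noisy gradients. The recursion below, gamma_{k+1} = gamma_k (1 - c gamma_k), mirrors the form of a recursive steplength rule; the constant c and all problem data are assumptions for illustration, not the dissertation's exact expressions:

```python
import random

# SGD on f(x) = 0.5 * mu * x^2 with Gaussian gradient noise, using a
# recursively shrinking steplength. c is an illustrative constant tied
# to the strong convexity parameter mu; it is an assumption, not the
# thesis's derived rule.
random.seed(0)
mu = 2.0          # strong convexity constant of f
gamma = 0.4       # initial steplength, assumed below 1 / c
c = mu / 2.0
x = 5.0           # starting point far from the minimizer x* = 0

for _ in range(2000):
    noisy_grad = mu * x + random.gauss(0.0, 0.1)
    x -= gamma * noisy_grad
    gamma *= (1.0 - c * gamma)   # steplength updated from its previous value

print(abs(x) < 0.1)  # iterate is close to the minimizer
```

The point of the recursion is that each steplength is a simple function of the previous one and problem parameters, so no hand-tuned schedule is needed; the recursion behaves like gamma_k ~ 1/(c k) asymptotically.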
Stochastic approximation methods
Stochastic optimization
Stochastic variational inequalities
Game theory
Thu, 22 Aug 2013 16:38:34 GMT
Random projection methods for stochastic convex minimization
http://hdl.handle.net/2142/45381
Random projection methods for stochastic convex minimization
Tursun, Umit
The first focus of this thesis is solving a stochastic convex minimization problem over an arbitrary family of nonempty, closed, and convex sets. The problem has several random features: the gradient or subgradient of the objective function carries stochastic errors; the number of constraint sets can be large or even infinite; and the constraint sets might not be known a priori but instead be revealed through random realizations, or randomly chosen from a collection of constraint sets over the horizon, as in online learning.
Traditional projection algorithms for minimization problems require projecting onto the complete collection of constraints at once, or onto a subset of them chosen by a predefined selection rule. In practical applications, however, the constraints might not all be known a priori, and even when they are, projecting onto their intersection might be computationally prohibitive. We propose a two-step gradient/subgradient iterative
method with random projections. In the first step, a gradient/subgradient step is taken before observing the random constraint-set realization, yielding an intermediate point obtained without regard to feasibility. Once the set realization is revealed, or a set is chosen from the collection of constraint sets, the feasibility violation of the intermediate point is corrected by projection. We show that projecting onto a random subcollection of the sets with a diminishing stepsize is sufficient for the iterates to converge to the solution set almost surely. Convergence of the algorithm within an error bound is also proved for constant and for nondiminishing, nonsummable stepsizes. In a first set of experiments, we test the performance of the algorithm on a dynamic control system. We study three versions of the problem, with correlated unknown-but-bounded additive noise, uncorrelated unknown-but-bounded additive noise, and uncorrelated bounded-output and peak-input additive noise, all under a fully known system description. This is essentially a robust least-squares estimation problem in which we recover state parameters from corrupted input and output data. We reformulate the linear least-squares estimation problem as a stochastic convex minimization problem and apply the two-step random projection algorithm to solve it. Although the problem has infinitely many constraints, one for each realization of the error term within its bounded set, the algorithm visits only a finite subset of them and converges to the solution set. We also prove the existence of solutions and provide equivalent minimization formulations or upper bounds for these three types of robust least-squares problems. A standard subgradient algorithm is used to gauge the performance of our method, and the implementation results are comparable to those found in the literature.
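The two-step scheme can be sketched in one dimension under assumed, made-up data: minimize f(x) = (x - 3)^2 subject to x lying in the intersection of two intervals, where each iteration takes a gradient step (possibly leaving the feasible region) and then projects onto one randomly drawn constraint set:

```python
import random

# Two-step scheme: gradient step, then projection onto a random constraint.
# Feasible set is the intersection [0, 1]; the minimizer of (x - 3)^2 over
# it is x* = 1. All data here are illustrative assumptions.
random.seed(1)
intervals = [(0.0, 1.0), (-5.0, 2.0)]    # each interval is one constraint set
x = -4.0

for k in range(1, 3000):
    gamma = 1.0 / k                       # diminishing stepsize
    x -= gamma * 2.0 * (x - 3.0)          # step 1: gradient step, may be infeasible
    lo, hi = random.choice(intervals)     # step 2: observe a random set realization
    x = min(max(x, lo), hi)               # correct the feasibility violation

print(abs(x - 1.0) < 0.05)
```

Even though no iteration projects onto the full intersection, the random single-set projections combined with the diminishing stepsize drive the iterates to the constrained minimizer.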
Our next focus is a stochastic convex feasibility problem. We explore an algorithmic approach to solving both consistent and inconsistent convex feasibility problems for closed, convex, uncertain sets, concentrating on the uncertain nature of the sets and on finding a feasible point using a random subcollection of them. The sets we consider may carry uncertainty due to inaccurate or imprecise spatial, spectral, or stochastic information and confidence levels. To this end, we consider a stochastic optimization problem of minimizing an expected weighted proximity function over a collection of closed, convex sets. We show that the proposed algorithm converges to a point in the solution set when the solution set is nonempty. For an inconsistent feasibility problem, i.e., when the intersection of the closed convex constraint sets is empty, the algorithm minimizes the proximity function. Projecting onto a subcollection of sets can be viewed as lying between a random implementation of the alternating projection method and the parallel projection method. Our method, however, is not deterministic: it uses random projections onto sets that carry additive bounded noise, where each realization within the bounded disturbances has an equal chance of occurrence.
The conventional approach to set-theoretic estimation problems provides solutions that conform to constraint sets known a priori or observed. It fails to take into account, however, that sets built on a priori or observed data may carry disturbances or erroneously predicted statistical information, which may result in inconsistent sets. The attributes of the original signal used to build sets in estimation problems, such as amplitude bounds, region of support, and band-limitedness, may not be accurate. Additionally, sets built using moments, spectral properties, distributions, and bound information are based on predicted stochastic estimates; overly conservative confidence bounds or statistical assumptions may cause inconsistencies, as may noise perturbations in measurements or random variations in the impulse response of a system. Our algorithm projects onto a subcollection of sets, some of which carry a random realization of noise. The implementation results show that the algorithm converges to the solution asymptotically even though it projects onto a random subcollection of sets at each iteration.
In summary, this thesis presents iterative methods for solving stochastic convex minimization problems and stochastic convex set intersection problems. The almost-sure convergence of the algorithms is proven, and their performance is demonstrated in numerical experiments.
stochastic convex minimization
random projection
stochastic convex feasibility problem
Thu, 22 Aug 2013 16:38:27 GMT
New results on some quadratic programming problems
http://hdl.handle.net/2142/45346
New results on some quadratic programming problems
Yang, Rui
In this thesis we present new, effective algorithms for several special classes of quadratic programming problems. The problems we study can be classified into two categories. The first group contains two optimization problems with binary constraints. To solve these problems, we first explore an intrinsic relation between binary quadratic problems and data clustering, and then utilize this relation to develop effective approximation algorithms. The first problem we consider is a special class of binary quadratic problem (BQP) in which the number of nonzero elements is fixed. We use convex quadratic relaxation as a geometric embedding tool to reformulate the underlying BQP as a clustering problem whose target is to find a single cluster of fixed size, and we propose a simple 2-approximation algorithm for the clustering problem. In the second project we study the Binary Matrix Factorization (BMF) problem with the additional restriction that the matrix product be binary, which we call constrained binary matrix factorization (CBMF). We propose alternating update procedures for CBMF in which a binary LP subproblem is solved to update each matrix argument. We provide both a deterministic 2-approximation and a randomized approximation algorithm; the deterministic algorithm has complexity exponential in k, while the randomized algorithm runs in O(kmn) time. The second part of this thesis concerns portfolio selection under hard, realistic settings. We first consider a new approach to the portfolio selection problem with cardinality and threshold constraints that employs a recent technique (based on an lp penalty with 0 < p < 1) for finding sparse solutions. The key idea is to interpret the cardinality constraint as a constraint on the sparsity of the solution, which allows us to use recently developed techniques for sparse solutions in linear and quadratic optimization to find a solution that satisfies the cardinality constraint.
Numerical experiments indicate that our proposed Hybrid algorithm is fast and provides good approximate solutions with attractive features in financial applications. In the last project we develop an online learning algorithm for quadratic programming problems. Our learning-based algorithm constructs a pricing vector from a training problem of the previous period, and this price vector is then used to make decisions sequentially. Under the distribution-free random permutation model and some mild assumptions, we propose a 1 − O(ϵ) learning algorithm for this online problem. The results
can be applied to make better decisions when facing online problems with quadratic objective functions.
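The interpretation of the cardinality constraint as a sparsity constraint can be illustrated with a stand-in projection: keep the k largest-magnitude portfolio weights and renormalize. This is not the dissertation's lp-penalty Hybrid algorithm, only the sparsity reading it builds on, with invented weights:

```python
import numpy as np

def project_cardinality(w, k):
    """Project weights onto 'at most k nonzero positions' by keeping the
    k largest-magnitude entries, then renormalize to sum to 1."""
    idx = np.argsort(-np.abs(w))[:k]   # indices of the k largest entries
    sparse = np.zeros_like(w)
    sparse[idx] = w[idx]
    return sparse / sparse.sum()

w = np.array([0.40, 0.05, 0.30, 0.15, 0.10])  # hypothetical portfolio weights
w3 = project_cardinality(w, 3)
print(int((w3 > 0).sum()))   # 3
```

An lp penalty with 0 < p < 1 achieves a similar effect implicitly, shrinking small weights to exactly zero rather than truncating them after the fact.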
optimization
quadratic programming
matrix factorization
online algorithm
portfolio selection
Thu, 22 Aug 2013 16:37:19 GMT
Implementation of safe and reliable coverage control for multiple mobile robots
http://hdl.handle.net/2142/45334
Implementation of safe and reliable coverage control for multiple mobile robots
Rekoske, Richard
This thesis presents results relating to an implementation of a safe and reliable coverage control algorithm. The application focused on differential drive robots developed at the University of Illinois at Urbana-Champaign. Control laws for coverage, avoidance, and proximity were applied to a multi-agent system of robots. The control laws were merged to provide coverage using the system while guaranteeing inter-agent proximity, inter-agent collision avoidance, and agent-environment collision avoidance. Circular regions were considered for avoidance. The performance and limitations of the application are examined, and practical considerations for implementation are discussed. The experimental platform consisted of a motion capture system, three differential drive robots with multiple sensing capabilities, and two supporting computers for the motion capture system and data visualization.
COVERAGE CONTROL
IMPLEMENTATION
OPTITRACK
MOTION CAPTURE SYSTEM
OMAPL138
AVOIDANCE CONTROL
PROXIMITY CONTROL
KALMAN FILTER
MULTIPLE AGENTS
MULTI-OBJECTIVE
Thu, 22 Aug 2013 16:36:58 GMT
Linear and nonlinear semidefinite relaxations of some NP-hard problems
http://hdl.handle.net/2142/45327
Linear and nonlinear semidefinite relaxations of some NP-hard problems
Zhu, Tao
Semidefinite relaxation (SDR) is a powerful tool for estimating bounds and obtaining approximate solutions to NP-hard problems. This thesis introduces and studies several novel linear and nonlinear semidefinite relaxation models for NP-hard problems. We first study the semidefinite relaxation of the Quadratic Assignment Problem (QAP) based on matrix splitting. We characterize an optimal subset of all valid matrix splittings and propose a method to find them by solving a tractable auxiliary problem. A new matrix splitting scheme called sum-matrix splitting is also proposed and its numerical performance is evaluated.
We next consider the so-called Worst-case Linear Optimization (WCLO) problem, which has applications in systemic risk estimation and stochastic optimization. We show that WCLO is NP-hard and present a coarse linear SDR. An iterative procedure is introduced to sequentially refine the coarse SDR model, and the sequence of refined models is shown to converge to a nonlinear semidefinite relaxation (NLSDR) model. We then propose a bisection algorithm to solve the NLSDR in polynomial time. Our preliminary numerical results show that the NLSDR can provide very tight bounds, and sometimes the exact global solution, for WCLO.
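Solving the NLSDR itself requires an SDP solver; what can be sketched here is only the generic bisection template the abstract refers to, in which a monotone feasibility test brackets the optimal level. The test below (t^2 <= 2) is a hypothetical stand-in for "is the relaxation feasible at level t":

```python
def bisect(feasible, lo, hi, tol=1e-8):
    """Find the largest t in [lo, hi] with feasible(t) True, assuming
    feasible is monotone: feasible(lo) is True, feasible(hi) is False."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid      # mid is feasible, search above it
        else:
            hi = mid      # mid is infeasible, search below it
    return lo

root = bisect(lambda t: t * t <= 2.0, 0.0, 2.0)
print(abs(root - 2.0 ** 0.5) < 1e-6)  # True
```

Each feasibility query in the actual algorithm would be one semidefinite program, so the polynomial runtime follows from the logarithmic number of bisection steps times the cost of one SDP solve.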
Motivated by the NLSDR model, we introduce a new class of relaxations called conditionally quasi-convex relaxation (CQCR). The CQCR model is obtained by augmenting the objective with a special kind of penalty function. The general CQCR model has an undetermined nonnegative parameter α, and the CQCR model with α = 0 (denoted CQCR(0)) is the strongest of all CQCR models. We next propose an iterative procedure to approximately solve CQCR(0) and a bisection procedure to solve CQCR(0) under some assumptions. Preliminary numerical experiments illustrate that the proposed algorithms are effective and that the CQCR(0) model outperforms classic relaxation models.
Non-deterministic Polynomial-time hard (NP-hard) problems
Semidefinite Relaxation
Quasi-convex Relaxation
Nonlinear Semidefinite Relaxation
Worst-case Linear Optimization
Quadratic Assignment Problem
Thu, 22 Aug 2013 16:36:44 GMT
Risk penalties for enhanced reliability in co-optimized markets with uncertain generation
http://hdl.handle.net/2142/45283
Risk penalties for enhanced reliability in co-optimized markets with uncertain generation
Kang, Kyoungwon
With an increasing proportion of wind power, an important concern is maintaining the reliability of the electric grid in the face of higher supply-side volatility. In this work, we examine the role of risk-based penalties in developing alternate designs in which firms combine energy bids associated with uncertain real-time availability with stable reserves bids. The study is carried out in a regime where firms have access to a day-ahead market, an uncertain real-time energy market, and a reserves market. The resulting game-theoretic problem is a two-period stochastic Nash game with risk-based objectives, and the associated equilibrium conditions are given by a complementarity problem. Preliminary numerical results on a 6-firm problem provide insights into the impact of reserves and risk penalties on wind-based generation, particularly in the face of high variability.
Wind power
Complementarity problem
Nash game
Variational inequality
Risk
Risk penalty
Power market
Thu, 22 Aug 2013 16:34:38 GMT