Abstract: In this dissertation, we propose a general approach that can significantly reduce the complexity of solving discrete, continuous, and mixed constrained nonlinear programming (NLP) problems. A key observation is that most application-based NLPs have structured arrangements of constraints. For example, constraints in AI planning are often localized into coherent groups based on their corresponding subgoals. In engineering design problems, such as the design of a power plant, most constraints exhibit a spatial structure based on the layout of the physical components. In optimal-control applications, constraints are localized by stages or time.
We have developed techniques that exploit these constraint structures by partitioning the constraints into subproblems related by global constraints. Partitioning yields greatly relaxed subproblems that are much easier to solve; however, the global constraints that span multiple subproblems must still be resolved. Previous methods cannot exploit such structures through constraint partitioning because they have no efficient way to resolve inconsistent global constraints.
To address this, we have developed a mathematical theory, the theory of extended saddle points (ESP), that supports constraint partitioning when solving NLPs and provides strong necessary and sufficient analytical conditions for limiting the subspace to be searched when resolving the global constraints. Based on a novel penalty formulation, ESP gives a condition that is both necessary and sufficient for constrained local optima of NLPs in discrete, continuous, and mixed spaces. It facilitates constraint partitioning by providing a set of necessary conditions, one for each subproblem, that characterize the local optima, and it further reduces complexity by defining a much smaller search space in each subproblem for backtracking. Since resolving the global constraints incurs only a small overhead, our approach leads to a significant reduction in overall complexity.
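As an illustrative sketch of what such a penalty-based saddle-point condition can look like (the exact symbols and form below are our assumptions for exposition, not quoted from the dissertation), consider an NLP with objective $f(x)$, equality constraints $h(x)=0$, and inequality constraints $g(x)\le 0$, and an $\ell_1$-penalty function

```latex
% Assumed \ell_1-penalty function; |h(x)| and \max(0, g(x)) are componentwise.
\[
  \Phi(x,\alpha,\beta) \;=\; f(x) \;+\; \alpha^{T}\,\lvert h(x)\rvert
    \;+\; \beta^{T} \max\bigl(0,\, g(x)\bigr).
\]
% An extended-saddle-point-style condition then asserts that x^* is a
% constrained local optimum if and only if there exist finite penalty
% vectors \alpha^*, \beta^* such that, for every \alpha \ge \alpha^* and
% \beta \ge \beta^*, and for all x in a neighborhood of x^*:
\[
  \Phi(x^{*},\alpha',\beta') \;\le\; \Phi(x^{*},\alpha,\beta)
    \;\le\; \Phi(x,\alpha,\beta)
  \qquad \text{for all } \alpha' \ge 0,\ \beta' \ge 0.
\]
```

Under this form, the left inequality forces feasibility of $x^{*}$ (since $\Phi(x^{*},\cdot,\cdot)=f(x^{*})$ only when all constraints are satisfied), while the right inequality forces local optimality; crucially, it holds for all sufficiently large finite penalties rather than for a single multiplier, which is what makes the condition decomposable across partitioned subproblems.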