Abstract: Modern engineering challenges motivate a transition from conventional systems that rely on measurements of physical quantities to systems that interpret and respond to subjective evaluations of the world. Unlike engineering problems with quantifiable objectives, such as controlling a system based on noisy measurements or transmitting information through a medium, sources that provide subjective information, which we will refer to as ``experts'', evaluate the world based on a potentially hidden rationale. Learning, inference, and decision making based on subjective evaluations, or opinions, are not only common aspects of human learning but also fundamental engineering challenges due to the hidden uncertainty. The objective of this work is to establish the fundamentals of learning from opinions by addressing key problems that rely on subjective information with hidden models. Specifically, Chapter 2 focuses on the sequential consultation of experts, Chapter 3 investigates statistical methods for opinion aggregation, Chapter 4 addresses fidelity-based error detection and mitigation, and Chapter 5 studies the impact of high-dimensional uncertainty on networks.
Contextually, an opinion has associated costs. Consulting an expert incurs, among others, time, resource, and opportunity costs. In engineering systems in particular, such costs further manifest as circuit-area, system-complexity, runtime, or memory requirements. The conventional decision-making framework with pre-allocated resources does not necessarily capture the trade-off between the utility to be gained by consulting an expert and the associated costs. Sequential consultation of experts arises naturally in this context, where the objective is to decide whether to consult another expert or to make a decision based on the opinions received up to that time. The true utility of consulting another expert depends not only on the cost of consultation or the individual expertise, but also on the instantaneous decision strength implied by the statistics accumulated so far. A fundamental challenge is to find a sequential strategy that resolves this trade-off. In Chapter 2, we show that the strategy achieving the maximum expected reward takes the form of a sequential likelihood ratio test, where a unique threshold function depends on the cost-performance trade-off of all future experts to be consulted.
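As a simplified illustration of the stopping rule described above, the sketch below implements a classical Wald-type sequential probability ratio test over a stream of binary expert opinions. The constant thresholds, the assumed common competence `p`, and the error targets `a` and `b` are illustrative simplifications; the strategy derived in Chapter 2 instead uses threshold functions that depend on the cost-performance trade-off of the future experts.

```python
import math
import random

def sprt_consult(opinions, p=0.8, a=0.05, b=0.05):
    """Wald-style sequential test over a stream of binary expert opinions.

    Each opinion is 1 (vote for H1) or 0 (vote for H0); p is the assumed
    competence, i.e. the probability an expert reports the true hypothesis.
    Returns (decision, number_of_experts_consulted).
    """
    upper = math.log((1 - b) / a)   # accept H1 at or above this level
    lower = math.log(b / (1 - a))   # accept H0 at or below this level
    llr = 0.0
    for n, o in enumerate(opinions, start=1):
        # log-likelihood ratio contribution of one opinion
        llr += math.log(p / (1 - p)) if o == 1 else math.log((1 - p) / p)
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    # all experts consulted without crossing a threshold: forced decision
    return (1 if llr > 0 else 0), len(opinions)

random.seed(0)
# Truth is H1; each expert is correct with probability 0.8.
stream = [1 if random.random() < 0.8 else 0 for _ in range(100)]
decision, n = sprt_consult(stream)
```

Note that the test typically stops after a handful of consultations, so the remaining experts, and their costs, are never incurred.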
Reliable mathematical models for experts might be difficult, or in some cases impossible, to obtain or quantify due to the inherent subjectivity of a task, the limited insight that training might yield for real-world encounters, or the massively high-dimensional space from which an expert might build a rationale for decision making. However, the difficulty of modeling does not necessarily render statistical inference implausible. It is often reasonable to accept experts as honest but fallible sources of information that do not purposefully deceive the decision-maker. Populations comprising such experts are less subjective than their individual constituents, and a natural understanding of correctness arises: when objective truth is not achievable, one might choose to accept the consensus of opinions as truth to the best of one's knowledge. This leads to an alternative notion of reliability, termed ``pseudo competence'', which in turn allows reliable statistical inference. In Chapter 3, we show that pseudo competencies can be estimated empirically on test data by centralized computation, or estimated in a distributed manner over strongly connected networks. We further show that opinion aggregation mechanisms that use pseudo competencies can, in some cases, achieve performance comparable to decision rules that have reliable models for experts.
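One elementary instance of this idea, sketched below under illustrative assumptions, estimates each expert's pseudo competence as the empirical rate of agreement with the majority vote over a batch of test tasks, and then aggregates fresh opinions with log-odds weights (the Nitzan-Paroush form), substituting the estimated pseudo competencies for the unavailable true reliabilities. The estimators and aggregation rules studied in Chapter 3 are more general than this centralized sketch.

```python
import math

def pseudo_competence(votes):
    """Estimate each expert's pseudo competence as the rate at which the
    expert agrees with the strict-majority consensus over a batch of
    binary tasks; votes[i][t] is expert i's vote (0/1) on task t."""
    n_experts, n_tasks = len(votes), len(votes[0])
    consensus = [1 if 2 * sum(votes[i][t] for i in range(n_experts)) > n_experts
                 else 0
                 for t in range(n_tasks)]
    return [sum(votes[i][t] == consensus[t] for t in range(n_tasks)) / n_tasks
            for i in range(n_experts)]

def weighted_vote(opinions, comps, eps=1e-6):
    """Log-odds weighted majority using estimated pseudo competencies in
    place of true reliabilities (eps guards against log of 0)."""
    score = sum((1 if o else -1) * math.log((c + eps) / (1 - c + eps))
                for o, c in zip(opinions, comps))
    return 1 if score > 0 else 0

# Four mostly-reliable experts and one contrarian over five test tasks.
votes = [[1, 1, 1, 0, 1],
         [1, 0, 1, 1, 1],
         [1, 1, 1, 1, 1],
         [1, 1, 0, 1, 1],
         [0, 0, 0, 0, 0]]
comps = pseudo_competence(votes)
```

With these pseudo competencies, `weighted_vote` effectively discounts the contrarian: an opinion from an expert whose estimated competence is near zero is inverted rather than ignored, consistent with the log-odds weighting.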
Experts, as error-prone computational units, are often subject to unknown or high-dimensional failure mechanisms. However, the robustness of a computational unit can be inferred relatively reliably from the corresponding system complexity, motivating fidelity-based safeguarding mechanisms against what are often called ``black swan'' events: failures that happen with low probability yet have a high impact on the system. A method for jointly testing for failure and bypassing erroneous outcomes, called algorithmic noise tolerance, uses computational units that are robust yet of lower fidelity to safeguard the system against high-impact errors from high-fidelity computational units, without requiring exact models for operation or failure. In Chapter 4, we propose model-independent design principles for algorithmic noise tolerance and address fundamental limits of distributed error bypassing.
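A minimal sketch of the algorithmic noise tolerance principle follows, with hypothetical computational units: the robust, low-fidelity replica is consulted only to detect and bypass large-magnitude errors in the high-fidelity unit, so no model of the failure mechanism itself is required. The specific units, failure model, and threshold below are illustrative, not the design principles derived in Chapter 4.

```python
def ant_output(main, replica, threshold):
    """Algorithmic noise tolerance: accept the high-fidelity result unless
    it deviates from the robust low-fidelity estimate by more than the
    threshold, in which case fall back to the estimate."""
    return main if abs(main - replica) <= threshold else replica

# Hypothetical high-fidelity unit that rarely suffers a catastrophic,
# large-magnitude failure (a "black swan" event).
def high_fidelity(x, fail=False):
    return x + 1000.0 if fail else x

# Hypothetical robust but low-precision replica (coarse rounding).
def low_fidelity(x):
    return round(x)

x = 3.14159
ok = ant_output(high_fidelity(x), low_fidelity(x), threshold=1.0)
bad = ant_output(high_fidelity(x, fail=True), low_fidelity(x), threshold=1.0)
```

In the nominal case the full-precision result passes through unchanged; in the failure case the output degrades gracefully to the coarse estimate instead of propagating the high-impact error.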
Networks comprising stochastic components are a consequence of the uncertainty inherent to the embedding and integration of systems into physical realizations and substrates. Due to the massive dimensionality of assembly, fabrication, and integration processes, stochastic modeling of such uncertainty can be prohibitive, and current methods are exceedingly conservative, often leading to massive over-design. In Chapter 5, we investigate concentration properties of certain network quantities of linear resistive networks under topology-preserving uncertainty profiles, without relying on exact mathematical models for componentwise or network uncertainty. Furthermore, we quantify the effects of Johnson-Nyquist noise and address inter-component dependence due to integration processes.
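For reference, the Johnson-Nyquist noise mentioned above is the thermal noise of a resistive component, with standard root-mean-square voltage $v_{\mathrm{rms}} = \sqrt{4 k_B T R \,\Delta f}$; the component values in the sketch below are illustrative, not taken from the thesis.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def johnson_nyquist_vrms(resistance_ohm, temp_kelvin, bandwidth_hz):
    """RMS thermal-noise voltage across a resistor: sqrt(4 k_B T R df)."""
    return math.sqrt(4 * K_B * temp_kelvin * resistance_ohm * bandwidth_hz)

# Example: a 1 kOhm resistor near room temperature over a 10 kHz bandwidth
v_rms = johnson_nyquist_vrms(1e3, 300.0, 1e4)  # on the order of 0.4 uV
```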