Abstract: In this thesis, we propose a recurrent FIR neural network, develop a constrained formulation for neural-network learning, study an efficient violation-guided backpropagation algorithm for solving the constrained formulation based on the theory of extended saddle points, and apply neural-network learning to predicting both noise-free and noisy time series. The recurrent FIR neural-network architecture combines a recurrent structure and a memory-based FIR structure in order to provide more powerful modeling ability. The constrained formulation for neural-network learning incorporates the error of each learning pattern as a constraint, a new cross-validation scheme that allows multiple validation sets to be considered in learning, and new constraints that can be expressed in procedural form. The violation-guided backpropagation algorithm first transforms the constrained formulation into an l1-penalty function and then searches for a saddle point of the penalty function.
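The penalty transformation described above can be sketched as follows; the notation is illustrative rather than the thesis's exact formulation. With weights $w$, training objective $f(w)$, per-pattern errors $e_i(w)$, and an error tolerance $\tau$, the l1-penalty function and the extended-saddle-point condition take the form:

```latex
% l1-penalty over pattern-error constraints |e_i(w)| <= tau (illustrative)
L_1(w, \lambda) = f(w) + \sum_i \lambda_i \,\max\bigl(0,\; |e_i(w)| - \tau\bigr)

% search for an extended saddle point (w^*, \lambda^*):
L_1(w^*, \lambda) \;\le\; L_1(w^*, \lambda^*) \;\le\; L_1(w, \lambda^*)
% for all \lambda \ge 0 and all w in a neighborhood of w^*
```

Intuitively, the multipliers $\lambda_i$ grow on violated constraints, steering backpropagation toward the patterns whose errors exceed the tolerance.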
When applying the constrained formulation and violation-guided backpropagation to neural-network learning on near-noiseless time-series benchmarks, we achieve much-improved prediction performance compared to previous work, while using fewer parameters. For noisy time series, such as financial time series, we have systematically studied trade-offs between denoising and information preservation, and have proposed three preprocessing techniques for time series with high-frequency noise. In particular, we have proposed a novel approach that first decomposes a noisy time series into different frequency channels and then preprocesses each channel adaptively according to its level of noise. We incorporate constraints on predicting low-pass data in the lag period when a low-pass filter is employed to denoise the band. The new constraints enable active training in the lag period that greatly improves the prediction accuracy there. Extensive prediction experiments on financial time series have been conducted to exploit the modeling ability of neural networks, and promising results have been obtained.
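The channel-wise preprocessing idea can be illustrated with a minimal sketch. The decomposition here (moving-average low-pass plus high-frequency residual) and the soft-threshold denoiser are stand-ins chosen for brevity; the thesis's actual filters and per-channel rules may differ. All function names below are hypothetical.

```python
import numpy as np

def decompose(series, window=8):
    """Split a series into a low-frequency channel (moving average)
    and a high-frequency residual channel. Illustrative only."""
    kernel = np.ones(window) / window
    low = np.convolve(series, kernel, mode="same")
    return low, series - low

def denoise_channel(channel, threshold):
    """Soft-threshold a channel: shrink small, noise-like values
    toward zero while preserving larger structure."""
    return np.sign(channel) * np.maximum(np.abs(channel) - threshold, 0.0)

def adaptive_preprocess(series, window=8):
    """Denoise each frequency channel according to its own noise level."""
    low, high = decompose(series, window)
    # Adapt the threshold to the channel's estimated noise scale
    # (robust estimate via the median absolute deviation).
    thr_high = 1.4826 * np.median(np.abs(high - np.median(high)))
    # The low-pass channel is treated as nearly noise-free and kept intact.
    return low + denoise_channel(high, thr_high)
```

The key design point this sketch captures is that the threshold is chosen per channel: the high-frequency band, which carries most of the noise, is shrunk aggressively, while the low-frequency band is preserved.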