|Title:||IL(aleph): A unified approach to integrated learning|
|Author(s):||Katz, Bruce Farrell|
|Doctoral Committee Chair(s):||Stepp, Robert E.|
|Department / Program:||Education, Educational Psychology|
|Degree Granting Institution:||University of Illinois at Urbana-Champaign|
|Subject(s):||Education, Educational Psychology|
|Abstract:||Previous efforts to integrate Explanation-Based Learning (EBL) and Similarity-Based Learning (SBL) have treated these two methods as distinct interactive processes. In contrast, the synthesis presented here views these techniques as emergent properties of a local associative learning rule operating within a neural network architecture. This architecture consists of an input layer; a layer that buffers this input but is subject to descending influence from higher-order units in the network; one or more hidden units encoding the network's prior knowledge; and an output decision layer.
SBL is accomplished in the normal manner by training the network with positive and negative examples. Under the appropriate conditions, only a single positive example is required for EBL. Irrelevant features in the input are eliminated by the lack of top-down confirmation and/or by descending inhibition. Associative learning then strengthens the connections between relevant input features and activated hidden units, and forms "bypass" connections. On future presentations of the same (or a similar) example, the network reaches a decision more quickly, emulating the chunking of knowledge that takes place in symbolic EBL systems.
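The associative strengthening and "bypass" formation described above can be sketched with a simple Hebbian-style update; this is a minimal illustration, not the dissertation's actual model, and all layer sizes, names, and parameters (e.g. `hebbian_update`, `lr`) are assumptions.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Local associative rule: strengthen w[i, j] when pre-synaptic
    unit i and post-synaptic unit j are active together."""
    return w + lr * np.outer(pre, post)

# Illustrative layers: 4 input features, 2 hidden units, 1 output unit.
rng = np.random.default_rng(0)
w_in_hidden = rng.normal(scale=0.1, size=(4, 2))
w_bypass = np.zeros((4, 1))          # direct input-to-output "bypass" links

x = np.array([1.0, 1.0, 0.0, 0.0])   # two relevant features active
hidden = np.maximum(w_in_hidden.T @ x, 0.0)
out = np.array([1.0])                # correct decision on a positive example

# Co-activation strengthens input->hidden links and forms bypass links,
# so a later presentation of the same example can decide more quickly.
w_in_hidden = hebbian_update(w_in_hidden, x, hidden)
w_bypass = hebbian_update(w_bypass, x, out)
```

Note that only the active (relevant) features acquire bypass weights; inactive features contribute nothing to the outer product and so stay at zero.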
A simulation program, IL$\aleph$, provides verification of these claims. In addition, it is demonstrated how the program can learn relevance information. This information can then be transferred to new situations, resulting in very fast discrimination learning under the right set of conditions. The overtraining reversal effect (ORE) is also shown to be a direct consequence of this transfer process.
To achieve true integrated learning, however, more is needed than the rigid transfer of relevance information. The system must be capable of resolving what the author calls the relevance dilemma: EBL considers a limited set of features but has no way of recovering if the discriminating features lie outside this set, while SBL entertains a greater set of possibilities but is therefore inefficient. IL$\aleph$ passes between the horns of this dilemma by keeping features thought to be irrelevant slightly active. It is demonstrated that this procedure allows the system to restore these features to consideration if they turn out to be relevant, without unduly affecting learning time if they are indeed irrelevant. Finally, the model undergoes a further modification so that long chains of sequential reasoning can be simulated in a fixed architecture. Various results are then presented showing how the effects of EBL are achieved in this extended model.|
|Rights Information:||Copyright 1990 Katz, Bruce Farrell|
|Date Available in IDEALS:||2011-05-07|
|Identifier in Online Catalog:||AAI9114286|