Abstract: We believe one of the most promising but under-explored research areas in machine learning today is the integration of prior domain knowledge into the learning process. For a learning system, both training examples and domain knowledge provide information about the concept to be acquired. While most research aims at exploring new algorithms and techniques for extracting information from a training set, how to use prior domain knowledge is largely ignored. This neglect is rarely due to the unimportance of domain knowledge for a learning task. It is well known that a learning algorithm requires inductive bias in order to generalize beyond the training examples, and it is desirable to use domain knowledge to introduce domain-specific bias into learning systems. That the use of prior domain knowledge in machine learning remains under-explored is largely a result of the difficulty of utilizing the information it contains. The first difficulty is that prior knowledge is usually domain-specific, which makes it difficult to generalize the algorithms that aim to utilize it. Moreover, domain knowledge is usually expressed in experts' high-level vocabulary, which often differs from the vocabulary used to describe the examples. Finally, domain knowledge may be only approximate and imprecise, so directly incorporating such knowledge into an inductive learning system could introduce harmful bias. This research aims at avoiding or alleviating the above problems by using Explanation-Based Learning (EBL) to mediate between the evidence in the prior knowledge and the evidence in the training examples. Conventional EBL (DeJong, 1997) uses domain knowledge to explain the training examples and generalizes the explanations to obtain deeper patterns, which, if believed, commit the learner to assigning classification labels to many unseen examples.
In this work, we introduce a new learning framework in which the patterns obtained by EBL are used to introduce further inductive bias into a learning system. In this framework, EBL can be viewed as a mechanism that transforms high-level, domain-specific knowledge into specialized solution knowledge, which can then be used to introduce inductive bias into learning systems. We implemented the proposed explanation-based learning framework with three different approaches: the phantom-example approach, the feature-kernel approach, and the explanation-augmented SVM approach. In these approaches, we chose Support Vector Machines (SVMs) as the inductive learner to demonstrate how domain knowledge can improve the performance of a learning system. The SVM is a relatively new and successful approach to classification learning, and incorporating knowledge into SVMs remains a challenging problem. In this work, we present both theoretical and empirical results showing that our approaches use domain knowledge to improve SVM performance. The most novel aspect of our work is that the EBL procedure encourages interaction between the prior knowledge and the training examples. This allows our techniques to utilize information in the domain knowledge that is otherwise difficult to incorporate into SVMs. Moreover, the inductive bias introduced into the SVM is calibrated to the given example distribution, which potentially makes our approach more robust. We also compare the three proposed approaches, discuss related work, and point out directions for future work. We believe this work provides a first step toward a new research area in machine learning.
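To make the general idea concrete, the following is a minimal, self-contained sketch of the phantom-example intuition: prior knowledge is encoded as extra, knowledge-derived training points that are appended to the real data before an SVM is trained. The toy data, the hypothetical knowledge ("points far along the positive x-axis are positive"), and the tiny Pegasos-style linear SVM below are our own illustrative assumptions, not the implementation or experiments from this thesis.

```python
# Illustrative sketch only: a tiny linear SVM trained by Pegasos-style
# sub-gradient descent, with "phantom examples" derived from hypothetical
# domain knowledge appended to the training set.
import random

def train_linear_svm(points, labels, lam=0.01, epochs=300, seed=0):
    """Learn w, b for sign(w.x + b) under hinge loss with L2 regularization."""
    w, b, t = [0.0, 0.0], 0.0, 0
    rng = random.Random(seed)
    for _ in range(epochs):
        order = list(range(len(points)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)                      # decreasing step size
            x, y = points[i], labels[i]
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            w = [wj * (1.0 - eta * lam) for wj in w]   # regularization shrink
            if margin < 1.0:                           # hinge-loss violation
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# A few "real" training examples: the class is the sign of the first coordinate.
X = [(1.0, 0.5), (2.0, -0.3), (-1.0, 0.2), (-2.0, -0.5)]
y = [1, 1, -1, -1]

# Hypothetical domain knowledge, turned into phantom examples and appended
# to the training set so it biases the learned separator.
X_phantom = [(3.0, 0.0), (-3.0, 0.0)]
y_phantom = [1, -1]

w, b = train_linear_svm(X + X_phantom, y + y_phantom)
print([predict(w, b, x) for x in X + X_phantom])
```

The point of the sketch is only the data-augmentation step: the knowledge never touches the learner's internals, so the resulting bias is mediated by the same training procedure that fits the real examples.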