Knowledge consolidation using multiple task learning: Overcoming the stability-plasticity problem
LE3 .A278 2014
Master of Science
Lifelong Machine Learning (LML) is concerned with machines that face a series of sequential learning tasks over time and develop methods of retaining and using prior learned knowledge to assist new learning. This thesis presents an LML system using a context-sensitive multiple task learning (csMTL) neural network that functions as a consolidated domain knowledge (CDK) network. This research focuses on techniques to overcome the stability-plasticity problem in the context of back-propagation neural networks. Our goal is to develop an LML system that retains the knowledge of prior learned tasks without a decrease in classification accuracy as new task knowledge is integrated into the CDK network. The experiments focus on the effective retention of prior knowledge and the effective consolidation of new knowledge. Accuracy on an independent test set is used as the measure of effectiveness. We conduct sequential task learning experiments on a synthetic domain of twelve tasks and on three real-world domains. The studies indicate that csMTL CDK networks that have sufficient representation (two layers of hidden nodes) and use virtual examples generated from knowledge of the input space achieve effective retention of prior tasks and effective consolidation of new tasks when appropriate learning parameters are used.
The author retains copyright in this thesis. Any substantial copying or any other actions that exceed fair dealing or other exceptions in the Copyright Act require the permission of the author.