Abstract
Feedback-driven deep reinforcement learning methodologies are widely favoured approaches to solving artificial intelligence problems. These algorithms navigate complex decision-making tasks without manual state-space engineering. Notable problems once considered out of reach for machines, such as mastering Go, StarCraft, Dota 2 and Atari 2600 games, have been solved successfully. However, these algorithms require extensive amounts of time and data to specialize in a single problem. Transfer learning strategies address this challenge by supplementing the reinforcement learning task with shareable knowledge from a source task. The source provides foundational knowledge about the environment, thus reducing the time needed to learn. These strategies can also be adopted in the multi-task learning domain, which favours building generalized representations capable of solving different problems simultaneously, using shared representations from similar tasks as a source of inductive bias. The addition of transfer learning helps to ground the simultaneous training of different learning objectives.

This thesis employs the transfer learning paradigm Practice as an auxiliary task to improve the generalization capabilities of a distributed deep reinforcement learning algorithm in the multi-task problem setting. The algorithm implements distributed learners that optimize different task objectives and contribute to training a single representation proficient in all of them. Experimental results indicate that the state-dynamics information added by Practice effectively improves the generalization capabilities of the algorithm. Additionally, the contributions of each learner are analyzed to study their impact on the overall multi-task learning objective.