Files in this item

An Empirical Study of Meta-Learning.pdf (PDF, 718 kB)
Title: An Empirical Study of Meta-Learning: a step towards rigorously understanding meta-learning algorithms
Author(s): Brando Miranda
Subject(s): artificial general intelligence; learning to learn; few-shot learning; machine learning
Abstract: It has recently been observed that a good embedding is all we need to solve many few-shot learning benchmarks. In addition, other work has strongly suggested that MAML mostly works by this same mechanism: learning a good embedding. These observations highlight our lack of understanding of what meta-learning algorithms are doing and when they work. In this work we provide preliminary results that shed some light towards understanding meta-learning algorithms better. In particular, we identify three interesting properties: 1) it is possible to define a synthetic task that results in a higher degree of meta-adaptation, suggesting that current few-shot learning benchmarks might not have the properties needed for the success of meta-learning algorithms; 2) meta-overfitting occurs when the number of classes (or concepts) is finite, and this issue disappears once the task has an unbounded number of concepts; 3) more adaptation for MAML does not necessarily result in representations that have adapted more, or even in better performance. Finally, we suggest that to understand meta-learning algorithms better it is imperative that we go beyond tracking absolute performance alone and, in addition, formally quantify the degree of meta-learning and track both metrics together. Reporting results this way in future work should help us identify the sources of meta-overfitting more accurately and, we hope, design more flexible meta-learning algorithms. In the appendix we also argue that quantifying AI safety is important, but we leave this as future work.
Issue Date: 2020-12-23
Genre: Technical Report
Date Available in IDEALS: 2020-12-23
