| Literature DB >> 26659050 |
Human-level concept learning through probabilistic program induction
Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum.
Abstract
People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms: for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.
Year: 2015 PMID: 26659050 DOI: 10.1126/science.aab3050
Source DB: PubMed Journal: Science ISSN: 0036-8075 Impact factor: 47.728