## Frontmatter
| | |
| --- | --- |
| Authors | [[Luca Ambrogioni]], [[Julia Berezutskaya]], [[Umut Güçlü]], [[Eva van den Borne]], [[Yağmur Güçlütürk]], [[Marcel van Gerven]], [[Eric Maris]] |
| Date | 2017/12 |
| Source | [[Conference on Neural Information Processing Systems Workshops]] |
| URL | https://meta-learn.github.io/2017/papers/metalearn17_ambrogioni.pdf |
| Citation | Ambrogioni, L., Berezutskaya, J., Güçlü, U., van den Borne, E., Güçlütürk, Y., van Gerven, M., & Maris, E. (2017). [[Bayesian model ensembling using meta-trained recurrent neural networks]]. In _Conference on Neural Information Processing Systems Workshops_. [[URL](https://meta-learn.github.io/2017/papers/metalearn17_ambrogioni.pdf)]. #Conference |
## Abstract
In this paper we demonstrate that a recurrent neural network meta-trained on an ensemble of arbitrary classification tasks can be used as an approximation of the Bayes optimal classifier. This result is obtained by relying on the framework of likelihood-free approximate Bayesian inference, where the Bayesian posterior is approximated by training a neural network using synthetic samples. We denote the resulting model as a neural ensembler. We show that a single neural ensembler trained on a large set of synthetic data achieves competitive classification performance on multiple real-world classification problems without additional training.
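As a rough illustration of the idea (not the paper's actual implementation), the sketch below meta-trains a GRU on a stream of synthetic classification tasks: the network reads (input, previous label) pairs in sequence and is trained with log-loss to predict each label from the examples seen so far, which amortizes the posterior predictive. The linear task distribution, network sizes, and training loop are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_task(n_points=20, dim=2):
    """Draw one synthetic binary classification task: points labelled by a
    random linear boundary (a stand-in for the paper's task distribution)."""
    w = torch.randn(dim)
    x = torch.randn(n_points, dim)
    y = (x @ w > 0).long()
    return x, y

class NeuralEnsembler(nn.Module):
    """GRU that reads (input, previous label) pairs and predicts each label
    from the examples seen so far, i.e. an amortized posterior predictive."""
    def __init__(self, dim=2, n_classes=2, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(dim + n_classes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, y_prev):
        h, _ = self.rnn(torch.cat([x, y_prev], dim=-1))
        return self.head(h)

model = NeuralEnsembler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    x, y = sample_task()
    y_onehot = F.one_hot(y, num_classes=2).float()
    # Shift labels one step so the network only conditions on past examples.
    y_prev = torch.cat([torch.zeros(1, 2), y_onehot[:-1]], dim=0)
    logits = model(x.unsqueeze(0), y_prev.unsqueeze(0)).squeeze(0)
    loss = F.cross_entropy(logits, y)  # log-loss drives the Bayes-optimal target
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After meta-training, the same frozen network can be fed labelled examples of a new task followed by unlabelled queries, which is the sense in which a single ensembler transfers to new classification problems without additional training.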
## PDF
![[Bayesian model ensembling using meta-trained recurrent neural networks.pdf]]