This is a great question! There are at least two scenarios where having a meta-learning setup could help. The first is to learn the fast RL algorithm in simulation, by training over a distribution of real-world environments (varying physics, textures, etc.), and then running it in the real world. The second is to learn a fast RL algorithm over a wide range of tasks (for example, on a set of training games in Universe), and (hopefully) generalize to unseen games, analogous to generalization in supervised learning.
Actually, TensorFuse does support RNNs by porting a subset of Theano's scan to TensorFlow. I've been using it to port Lasagne to TensorFlow: https://github.com/dementrock/Lasagne-tf. The examples/recurrent.py there actually works.
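For context, Theano's scan applies a step function over a sequence while threading recurrent state from one step to the next, which is what makes RNNs expressible. Here is a minimal sketch of those semantics in plain Python/NumPy (an illustration of the scan idea, not the actual TensorFuse implementation; the function name and signature are simplified assumptions):

```python
import numpy as np

def scan(fn, sequences, outputs_info):
    """Minimal scan: apply fn(x_t, h_prev) -> h_t along the leading
    axis of `sequences`, threading the recurrent state h, and return
    the stacked per-step outputs (like Theano's scan, simplified)."""
    h = outputs_info  # initial recurrent state
    outputs = []
    for x_t in sequences:
        h = fn(x_t, h)  # one recurrence step
        outputs.append(h)
    return np.stack(outputs)

# Example: cumulative sum written as a recurrence h_t = h_{t-1} + x_t
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = scan(lambda x, h: h + x, xs, outputs_info=np.float64(0.0))
# ys -> [1., 3., 6., 10.]
```

An RNN cell is the same pattern with `fn` computing the hidden-state update from the input and previous hidden state; porting this loop to TensorFlow's graph ops is the part TensorFuse handles.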