
Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning

Journal Article
Wang, F; Tian, H; Xiong, H; Wu, H; Fu, J; Cao, Y; Kang, Y; Wang, H
Published in: Transactions on Machine Learning Research
September 1, 2022

Artificial neural networks (ANNs) are typically confined to accomplishing pre-defined tasks by learning a set of static parameters. In contrast, biological neural networks (BNNs) can adapt to various new tasks by continually updating their neural connections based on the inputs, which aligns with the paradigm of learning effective learning rules in addition to static parameters, e.g., meta-learning. Among various biologically inspired learning rules, Hebbian plasticity updates the neural network weights using local signals, without the guidance of an explicit target function, thus enabling an agent to learn automatically without human effort. However, typical plastic ANNs use a large number of meta-parameters, which violates the nature of the genomics bottleneck and potentially deteriorates generalization capacity. This work proposes a new learning paradigm that decomposes connection-dependent plasticity rules into neuron-dependent rules, thereby accommodating Θ(n²) learnable parameters with only Θ(n) meta-parameters. We also thoroughly study the effect of different neural modulations on plasticity. Our algorithms are tested in challenging random 2D maze environments, where agents must use their past experiences to shape their neural connections and improve their future performance. Our experimental results validate the following: 1. Plasticity can be adopted to continually update a randomly initialized RNN so that it surpasses pre-trained, more sophisticated recurrent models, especially for long-term memorization. 2. Following the genomics bottleneck, the proposed decomposed plasticity can be comparable to, or in some instances even more effective than, canonical plasticity rules.
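The core idea of the decomposition can be sketched in a few lines. In the canonical Hebbian setting, each connection carries its own plasticity coefficient, so the meta-parameter count grows as Θ(n²); under the decomposed rule, each neuron carries a coefficient and the per-connection rate is formed from per-neuron vectors. The sketch below is illustrative only, assuming a simple outer-product decomposition and a plain Hebbian update Δw_ij = η_ij · pre_i · post_j; the names (`eta_pre`, `eta_post`, `hebbian_update`) are hypothetical and the paper's actual rule may differ in form.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 8, 8

# Canonical plasticity: one learning rate per connection -> Θ(n^2) meta-parameters.
eta_full = rng.normal(size=(n_pre, n_post))

# Decomposed plasticity (sketch): per-neuron vectors whose outer product
# stands in for the per-connection rule -> only Θ(n) meta-parameters,
# yet it still shapes all Θ(n^2) connection weights.
eta_pre = rng.normal(size=n_pre)    # one coefficient per presynaptic neuron
eta_post = rng.normal(size=n_post)  # one coefficient per postsynaptic neuron
eta_decomposed = np.outer(eta_pre, eta_post)

def hebbian_update(W, pre, post, eta):
    # Local Hebbian rule: delta_w_ij = eta_ij * pre_i * post_j,
    # computed from local activity without an explicit target function.
    return W + eta * np.outer(pre, post)

W = np.zeros((n_pre, n_post))
pre = rng.normal(size=n_pre)
post = rng.normal(size=n_post)
W_new = hebbian_update(W, pre, post, eta_decomposed)

print(W_new.shape)            # (8, 8)
print(eta_pre.size + eta_post.size)  # 16 meta-parameters instead of 64
```

Under this decomposition, evolving the meta-parameters means searching a Θ(n)-dimensional space, which is what keeps the learned rule consistent with the genomics-bottleneck constraint the abstract describes.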


Published In

Transactions on Machine Learning Research

EISSN

2835-8856

Publication Date

September 1, 2022

Volume

2022 September
 

Citation

APA
Wang, F., Tian, H., Xiong, H., Wu, H., Fu, J., Cao, Y., … Wang, H. (2022). Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning. Transactions on Machine Learning Research, 2022 September.