On a New Type of Neural Computation for Probabilistic Symbolic Reasoning
New types of neural computation, i.e., methods of computing a neuron's activity from other neurons' activities and connection strengths, continuously push the boundary of what neural networks can do. For example, the attention mechanism, which computes weights dynamically from the input, enables Transformer-family models to significantly surpass older models built on fixed-weight transformations. We ask whether there exist still more powerful types of neural computation that surpass attention-based models in reasoning under uncertainty and manipulating abstract symbols. Taking probabilistic programming as the mathematical framework for capturing probabilistic and symbolic reasoning, we develop the first neural computation that allows probabilistic programs to be executed on neural networks. Evaluation on early language acquisition tasks shows that our method learns abstract rules from raw data, a capability that classical neural networks lack.
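The contrast drawn above between fixed-weight transformations and attention's dynamic weights can be made concrete with a minimal NumPy sketch (an illustration of the general distinction, not of this paper's method; all dimensions and matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))  # 4 tokens, 8-dimensional embeddings

# Fixed-weight transformation: the mixing matrix is a learned constant,
# the same regardless of the input.
W_fixed = rng.standard_normal((8, 8))
y_fixed = X @ W_fixed

# Attention: the mixing weights are computed dynamically from the input
# itself via query/key projections, so they change with every input X.
W_q = rng.standard_normal((8, 8))
W_k = rng.standard_normal((8, 8))
scores = (X @ W_q) @ (X @ W_k).T / np.sqrt(8)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
y_attn = weights @ X  # each output is an input-dependent mixture of tokens

# Each row of `weights` is a probability distribution over tokens,
# and it depends on X, unlike W_fixed.
assert np.allclose(weights.sum(axis=-1), 1.0)
print(y_attn.shape)  # (4, 8)
```

The point of the sketch: in the fixed-weight case the input only flows *through* the weights, while in attention the input also *determines* the weights, which is the kind of more expressive neural computation the abstract refers to.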