On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network
This paper explores the expressive power of deep neural networks through the framework of function compositions. We demonstrate that the repeated compositions of a single fixed-size ReLU network exhibit surprising expressive power, despite the limited expressive capabilities of the individual network itself. Specifically, we prove by construction that $\mathcal{L}_2 \circ g^{\circ r} \circ \mathcal{L}_1$ can approximate $1$-Lipschitz continuous functions on $[0,1]^d$ with an error $\mathcal{O}(r^{-1/d})$, where $g$ is realized by a fixed-size ReLU network, $\mathcal{L}_1$ and $\mathcal{L}_2$ are two affine linear maps matching the dimensions, and $g^{\circ r}$ denotes the $r$-times composition of $g$. Furthermore, we extend this result to generic continuous functions on $[0,1]^d$, with the approximation error characterized by the modulus of continuity. Our results reveal that a continuous-depth network generated via a dynamical system has immense approximation power even if its dynamics function is time-independent and realized by a fixed-size ReLU network.
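To make the architecture concrete, below is a minimal sketch, assuming PyTorch, of the composed map $\mathcal{L}_2 \circ g^{\circ r} \circ \mathcal{L}_1$: one fixed-size ReLU block $g$ whose weights are shared across all $r$ applications, sandwiched between two affine maps. The class name and the particular width, depth, and value of $r$ are illustrative assumptions, not the specific construction used in the paper's proof.

```python
import torch
import torch.nn as nn

class RepeatedCompositionNet(nn.Module):
    """Sketch of L2 ∘ g^{∘r} ∘ L1: a single fixed-size ReLU network g
    applied r times (with shared weights), between two affine maps."""

    def __init__(self, d_in, d_out, width=16, depth=2, r=8):
        super().__init__()
        # L1: affine map from the input dimension to the hidden width of g
        self.L1 = nn.Linear(d_in, width)
        # g: one fixed-size ReLU network, reused for every composition step
        layers = []
        for _ in range(depth):
            layers += [nn.Linear(width, width), nn.ReLU()]
        self.g = nn.Sequential(*layers)
        self.r = r
        # L2: affine map from the hidden width to the output dimension
        self.L2 = nn.Linear(width, d_out)

    def forward(self, x):
        h = self.L1(x)
        for _ in range(self.r):  # r-times composition g^{∘r}, same weights each time
            h = self.g(h)
        return self.L2(h)

# Usage: for 1-Lipschitz targets on [0,1]^d, the approximation error of such a
# composed network scales like O(r^{-1/d}) in r, per the paper's main result.
net = RepeatedCompositionNet(d_in=3, d_out=1, r=32)
y = net(torch.rand(5, 3))
```

Note that only the number of compositions $r$ grows with the target accuracy; the block $g$ itself stays fixed in size, which is what distinguishes this setting from increasing the width or depth of an ordinary feed-forward network.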