Neural Sequence Transformation

Journal Article

Monte Carlo integration is a technique for numerically estimating a definite integral by stochastically sampling its integrand. These samples can be averaged to make an improved estimate, and the progressive estimates form a sequence that converges to the integral value in the limit. Unfortunately, the sequence of Monte Carlo estimates converges at a rate of O(1/√n), where n denotes the sample count, effectively slowing down as more samples are drawn. To overcome this, we can apply sequence transformation, which transforms one converging sequence into another with the goal of accelerating the rate of convergence. However, analytically finding such a transformation for Monte Carlo estimates can be challenging, due to both the stochastic nature of the sequence and the complexity of the integrand. In this paper, we propose to leverage neural networks to learn sequence transformations that improve the convergence of the progressive estimates of Monte Carlo integration. We demonstrate the effectiveness of our method on several canonical 1D integration problems as well as applications in light transport simulation.
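
As a rough, self-contained illustration of the idea described above (and explicitly not the paper's neural method), the Python sketch below forms the progressive Monte Carlo estimates for a toy integrand and then applies Aitken's delta-squared process, a classical analytically derived sequence transformation. The integrand f(x) = 3x^2, the sample count, and the use of NumPy are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy integrand on [0, 1]; the true integral of f(x) = 3x^2 is 1 (illustrative choice).
    def f(x):
        return 3.0 * x**2

    # Progressive Monte Carlo estimates: the running mean after n samples
    # converges to the integral at a rate of O(1/sqrt(n)).
    n = 4096
    values = f(rng.uniform(0.0, 1.0, n))
    estimates = np.cumsum(values) / np.arange(1, n + 1)

    # Aitken's delta-squared process: a classical, hand-derived sequence
    # transformation (not the learned transformation proposed in the paper).
    def aitken(s):
        s = np.asarray(s, dtype=float)
        num = (s[2:] - s[1:-1]) ** 2
        den = s[2:] - 2.0 * s[1:-1] + s[:-2]
        den = np.where(np.abs(den) < 1e-12, np.nan, den)  # guard against division by zero
        return s[2:] - num / den

    accelerated = aitken(estimates)
    print("plain MC estimate  :", estimates[-1])
    print("Aitken transformed :", accelerated[-1])
    print("true integral      :", 1.0)

On smooth deterministic sequences Aitken's process can accelerate convergence substantially, but on noisy Monte Carlo sequences its second-difference denominator is unstable, which reflects the difficulty the abstract cites as motivation for learning the transformation instead.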

Cited Authors

  • Mukherjee, S; Hua, BS; Umetani, N; Meister, D

Published Date

  • October 1, 2021

Published In

  • Computer Graphics Forum

Volume / Issue

  • 40 / 7

Start / End Page

  • 131 - 140

Electronic International Standard Serial Number (EISSN)

  • 1467-8659

International Standard Serial Number (ISSN)

  • 0167-7055

Digital Object Identifier (DOI)

  • 10.1111/cgf.14407

Citation Source

  • Scopus