Delta-NAS: Difference of Architecture Encoding for Predictor-Based Evolutionary Neural Architecture Search
Neural Architecture Search (NAS) continues to serve a key role in the design and development of neural networks for task-specific deployment. Modern NAS techniques struggle to deal with ever-increasing search space complexity and compute cost constraints. Existing approaches can be categorized into two buckets: fine-grained, computationally expensive NAS and coarse-grained, low-cost NAS. Our objective is to craft an algorithm capable of performing fine-grained NAS at low cost. We propose projecting the problem to a lower-dimensional space by predicting the difference in accuracy between a pair of similar networks. This paradigm shift reduces computational complexity from exponential to linear with respect to the size of the search space. We present a strong mathematical foundation for our algorithm, in addition to extensive experimental results across a host of common NAS benchmarks. Our method significantly outperforms existing works, achieving better performance coupled with significantly higher sample efficiency.
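The core idea of predicting the accuracy *difference* between a pair of similar architectures, rather than each architecture's absolute accuracy, can be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: the encodings, the synthetic accuracy model, and the linear delta-predictor are all assumptions introduced for demonstration.

```python
# Minimal sketch of difference-of-architecture prediction (hypothetical data):
# regress the accuracy gap between two architectures from the difference of
# their encodings, instead of regressing each absolute accuracy separately.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical search space: 20 architectures encoded as 8-dim binary vectors,
# with synthetic "accuracies" from a hidden linear model plus small noise.
n_archs, dim = 20, 8
encodings = rng.integers(0, 2, size=(n_archs, dim)).astype(float)
true_w = rng.normal(size=dim)
accuracies = encodings @ true_w + 0.01 * rng.normal(size=n_archs)

# Build pairwise training data: encoding differences -> accuracy differences.
pairs = [(i, j) for i in range(n_archs) for j in range(n_archs) if i != j]
X = np.array([encodings[i] - encodings[j] for i, j in pairs])
y = np.array([accuracies[i] - accuracies[j] for i, j in pairs])

# Fit a linear delta-predictor by least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Rank a new candidate against a known baseline via the predicted gap:
# predicted accuracy gap = w . (candidate - baseline).
candidate, baseline = encodings[0], encodings[1]
predicted_gap = w @ (candidate - baseline)
actual_gap = accuracies[0] - accuracies[1]
```

Because each new candidate only needs to be compared against a nearby reference architecture, evaluation cost grows with the number of candidates considered rather than with the size of the full space, which is the intuition behind the exponential-to-linear reduction claimed above.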