ScaleNAS: Multi-Path One-Shot NAS for Scale-Aware High-Resolution Representation
Scale variance among different sizes of body parts and objects is a challenging problem for visual recognition tasks. Existing works usually design a dedicated backbone or apply Neural Architecture Search (NAS) for each task to tackle this challenge, but these approaches impose significant restrictions on the design or search space. To address these problems, we present ScaleNAS, a one-shot learning method for exploring scale-aware representations. ScaleNAS overcomes the limitations of prior scale representations by searching multi-scale feature aggregations. It adopts a flexible multi-path search space that allows an arbitrary number of blocks and cross-scale feature fusions. To cope with the high search cost incurred by this flexible space, ScaleNAS trains a multi-path supernet with one-shot learning, driven by grouped sampling and evolutionary search. Without further retraining, the resulting ScaleNet models can be directly deployed for different visual recognition tasks with superior performance. We use ScaleNAS to create high-resolution models for two different tasks: ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation. ScaleNet-P and ScaleNet-S outperform existing manually crafted and NAS-based methods on both tasks. Applied to bottom-up human pose estimation, ScaleNet-P achieves a new state of the art on COCO test-dev and CrowdPose test. In particular, ScaleNet-P4 achieves 71.3% AP on CrowdPose test, surpassing the previous best result by a large margin of 3.7% AP.
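To make the one-shot training step concrete, the sketch below illustrates grouped sampling in a multi-path supernet: sub-network configurations are partitioned into groups (for example, by total block count), and each training step draws one configuration from one group, so that shallow and deep paths receive comparable training. This is a minimal illustration under our own assumptions, not the authors' implementation; the names train_one_shot, groups, and the supernet(images, cfg) interface are hypothetical.

    import random

    def train_one_shot(supernet, loader, optimizer, criterion, groups, epochs):
        # `groups` is a list of lists of sub-network configurations,
        # partitioned (hypothetically) by depth / number of blocks.
        for _ in range(epochs):
            for step, (images, targets) in enumerate(loader):
                group = groups[step % len(groups)]   # round-robin over groups
                cfg = random.choice(group)           # uniform draw within the group
                optimizer.zero_grad()
                outputs = supernet(images, cfg)      # activate only the sampled path
                loss = criterion(outputs, targets)
                loss.backward()                      # update only the sampled path's weights
                optimizer.step()

After such training, candidate sub-networks inherit weights from the supernet and can be ranked by an evolutionary search over configurations, which is how the deployable ScaleNets are obtained without retraining.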