Exploring Epistemic Uncertainty in Domain Adaptive Semantic Segmentation
In domain adaptive semantic segmentation, domain shift may cause erroneous high-confidence predictions on the target domain, resulting in poor self-training. To alleviate this error, most previous works mainly consider the aleatoric uncertainty arising from inherent data noise. This may, however, lead to overconfidence in incorrect predictions and thus limit performance. In this paper, we take advantage of Deterministic Uncertainty Methods (DUM) to explore the epistemic uncertainty, which accurately reflects the domain gap induced by the choice of model and the parameters fitted on the source domain. The epistemic uncertainty on the target domain is evaluated on-the-fly to facilitate online reweighting and correction during self-training. Meanwhile, to tackle the imbalance in class-wise sample quantity and learning difficulty, we introduce a novel data resampling strategy that promotes simultaneous convergence across categories. This strategy prevents class-level over-fitting on the source domain and further boosts adaptation performance by better quantifying the uncertainty on the target domain. We demonstrate the superiority of our method over state-of-the-art approaches.
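To make the online reweighting concrete, the PyTorch-style snippet below down-weights the per-pixel self-training loss by an epistemic uncertainty map. This is a minimal sketch, not the paper's implementation: the function name, the assumption that the uncertainty is normalized to [0, 1], and the use of hard argmax pseudo-labels are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def epistemic_weighted_self_training_loss(logits_t, pseudo_labels, uncertainty,
                                          ignore_index=255):
    """Per-pixel cross-entropy on target pseudo-labels, down-weighted by
    epistemic uncertainty so unreliable (far-from-source) pixels contribute
    less to self-training.

    logits_t:      (B, C, H, W) target-domain predictions
    pseudo_labels: (B, H, W)    argmax pseudo-labels (ignore_index = unlabeled)
    uncertainty:   (B, H, W)    epistemic uncertainty in [0, 1], e.g. from a
                                deterministic uncertainty method (DUM)
    """
    ce = F.cross_entropy(logits_t, pseudo_labels,
                         ignore_index=ignore_index, reduction="none")  # (B, H, W)
    weight = (1.0 - uncertainty).detach()           # reliable pixels get weight ~1
    valid = (pseudo_labels != ignore_index).float() # mask out unlabeled pixels
    return (weight * ce * valid).sum() / valid.sum().clamp(min=1.0)
```

Because the weight is detached, gradients flow only through the segmentation logits, not through the uncertainty estimate itself, which keeps the reweighting a pure modulation of the self-training signal.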
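The class-balanced resampling can likewise be sketched as a weighted sampler over source images. This illustration makes simplifying assumptions: the paper's strategy also accounts for learning difficulty, whereas this sketch balances only class frequency, and `image_class_counts` together with the inverse-frequency weighting are hypothetical choices.

```python
import numpy as np
from torch.utils.data import WeightedRandomSampler

def make_class_balanced_sampler(image_class_counts, num_samples):
    """Build a sampler that draws source images containing rare classes
    more often, so all categories converge at a similar pace.

    image_class_counts: (N_images, N_classes) per-image pixel counts for
                        each class, precomputed from the source labels.
    """
    class_freq = image_class_counts.sum(axis=0)          # total pixels per class
    inv_freq = 1.0 / np.maximum(class_freq, 1.0)         # rare class -> large weight
    # an image's weight is driven by the rarest classes it contains
    image_weights = (image_class_counts > 0) @ inv_freq  # (N_images,)
    return WeightedRandomSampler(weights=image_weights.tolist(),
                                 num_samples=num_samples,
                                 replacement=True)
```

The returned sampler would be passed to the source `DataLoader` (e.g. `DataLoader(source_dataset, sampler=sampler, batch_size=...)`), replacing uniform shuffling with rare-class-aware sampling.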