Addressing Sparse Annotation: A Novel Semantic Energy Loss for Tumor Cell Detection from Histopathologic Images
Tumor cell detection plays a vital role in immunohistochemistry (IHC) quantitative analysis. While recent remarkable advances in fully-supervised deep learning have greatly improved the efficiency of this task, the requirement to manually annotate every cell of the target detection types remains impractical. Training such sparsely annotated datasets directly with full supervision introduces errors into the loss calculation, because unannotated cells are misclassified as background. To address this issue, we observe that although some cells are omitted during the annotation process, these unannotated cells share strong feature similarity with the annotated ones. Leveraging this characteristic, we propose a novel calibrated loss named Semantic Energy Loss (SEL). Specifically, our SEL automatically assigns a lower loss to unannotated regions whose semantics are similar to those of labeled regions, while penalizing regions with larger semantic differences. In addition, to prevent all regions from converging to similar semantics during training, we propose a Stretched Feature Loss (SFL) that widens the semantic distance. We evaluate our method on two different IHC datasets and achieve significant performance improvements in both sparse and exhaustive annotation scenarios. Furthermore, we validate that our method holds significant potential for detecting multiple types of cells. Our code is available here.
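The abstract does not give the exact formulation of SEL and SFL, but the mechanism it describes can be illustrated with a minimal NumPy sketch: background-loss weights for unlabeled regions are lowered when their features resemble a prototype built from annotated cells (the SEL idea), and a hinge term keeps cell and background prototypes apart so that not all regions collapse to similar semantics (the SFL idea). All function names, the cosine-similarity weighting, the temperature, and the margin below are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def _l2_normalize(x, axis=-1, eps=1e-8):
    # Normalize feature vectors so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def semantic_energy_weights(unlabeled_feats, labeled_feats, temperature=0.5):
    """Hypothetical stand-in for SEL's calibration: down-weight the
    background loss of unlabeled regions whose features resemble the
    prototype of annotated cells (likely missed annotations)."""
    proto = _l2_normalize(labeled_feats.mean(axis=0))
    sims = _l2_normalize(unlabeled_feats) @ proto   # cosine in [-1, 1]
    # High similarity -> small weight, so probable unannotated cells
    # are not punished as background; dissimilar regions keep weight ~1.
    return np.exp(-np.clip(sims, 0.0, 1.0) / temperature)

def stretched_feature_loss(cell_feats, background_feats, margin=1.0):
    """Hypothetical stand-in for SFL: hinge term that penalizes the
    model when cell and background prototypes are closer than a margin,
    widening the semantic distance between the two groups."""
    gap = np.linalg.norm(cell_feats.mean(axis=0) - background_feats.mean(axis=0))
    return max(0.0, margin - gap)
```

As a sanity check of the intended behavior: regions drawn near the annotated-cell cluster receive small background-loss weights, clearly different regions keep weights close to 1, and the stretch term is zero once the two prototypes are already farther apart than the margin.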