Decoding the Encoder
Autoencoders are used in a variety of safety-critical applications, where uncertainty quantification is a key component for bolstering model trustworthiness. As autoencoder architectures and the datasets they are trained on grow in complexity, the correlation between the input and its feature-space representation diminishes. To address this latent-space degeneracy, we propose a novel method that monotonically perturbs the encoded latent space to increase the entropy of the learned representation for every input. Each perturbation yields a unique decoded signature corresponding to an evaluation metric in the continuous domain; these signatures can be clustered to build a knowledge base and subsequently analyzed for outlier detection. At test time, in the absence of ground truth, we perturb the latent representation and match each test case's unique signature to the closest entry in the knowledge base for uncertainty quantification and outlier detection. We evaluate the proposed method on glomeruli segmentation of frozen kidney donor sections in whole-slide imaging, a safety-critical application in digital pathology that serves as a precursor to kidney transplantation. We demonstrate the method's effectiveness for outlier detection by ranking test cases according to their associated uncertainties, focusing the attention of medical experts on boundary cases.
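The perturb-then-match pipeline described above can be illustrated with a minimal sketch. The decoder, the choice of evaluation metric, and the perturbation schedule below are all hypothetical stand-ins (a toy linear decoder, mean absolute output change, and a linear ramp of perturbation magnitudes), and the clustering step is omitted for brevity; the paper's actual models and metrics are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained decoder: maps a 4-d latent vector
# to an 8-d output. The paper's real decoder is a segmentation model.
W = rng.normal(size=(8, 4))


def decode(z):
    return np.tanh(W @ z)


def signature(z, epsilons=np.linspace(0.0, 1.0, 11)):
    """Monotonically perturb the latent code and record an evaluation
    metric per step (here: mean absolute change of the decoded output,
    an assumed placeholder metric). The resulting curve is the case's
    unique signature."""
    base = decode(z)
    return np.array([np.abs(decode(z + eps) - base).mean() for eps in epsilons])


# Build a small knowledge base of signatures from "training" latents.
# (A real pipeline would cluster these signatures first.)
train_latents = rng.normal(size=(50, 4))
knowledge_base = np.stack([signature(z) for z in train_latents])


def closest_match(z_test):
    """Match a test case's signature against the knowledge base; the
    distance to the nearest stored signature serves as an uncertainty /
    outlier score usable for ranking test cases."""
    sig = signature(z_test)
    dists = np.linalg.norm(knowledge_base - sig, axis=1)
    idx = int(dists.argmin())
    return idx, float(dists[idx])


idx, score = closest_match(rng.normal(size=4))
print(f"nearest signature: {idx}, uncertainty score: {score:.4f}")
```

Ranking all test cases by `score` (largest first) would surface the boundary cases that most warrant expert review.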