Leveraging seed dictionaries to improve dictionary learning
Most state-of-the-art dictionary learning algorithms (DLAs) are iterative and must begin with an initial estimate of the dictionary, referred to as the seed. A seed can be generated randomly, but a more informed choice, such as a seed inferred from data of a related problem or one handcrafted from a priori knowledge of the problem at hand, has been shown to yield better solutions. Seed dictionaries therefore appear to encode valuable a priori information; however, most DLAs discard the seed after initialization. This work investigates whether the information encoded in a good seed can be leveraged further, by using the seed to influence learning after initialization. This is achieved by modifying the popular DLA K-SVD to treat the seed as a prior during learning, penalizing differences between the learned dictionary and the seed. The resulting algorithm, referred to as Seed Shrinkage Dictionary Learning (SSDL), is compared against K-SVD in image denoising experiments on several benchmark images. The results indicate that using the seed as a prior in this way consistently improves denoising performance. This simple approach motivates the development of more sophisticated methods for leveraging the a priori information encoded in useful seeds.
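As a rough sketch of the idea (the exact formulation in the paper may differ), the seed penalty can be expressed as an additive Frobenius-norm term in the standard K-SVD objective; here $D_0$ denotes the seed dictionary and $\lambda \ge 0$ is an assumed penalty weight:

\[
\min_{D,\,X} \; \lVert Y - D X \rVert_F^2 \;+\; \lambda \,\lVert D - D_0 \rVert_F^2
\quad \text{subject to} \quad \lVert x_i \rVert_0 \le T_0 \;\; \forall i,
\]

where $Y$ is the training data matrix, $X$ is the sparse coefficient matrix with columns $x_i$, and $T_0$ is the sparsity level. Setting $\lambda = 0$ recovers the original K-SVD objective, while larger $\lambda$ shrinks the learned dictionary toward the seed.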