Improving disentanglement-based image-to-image translation with feature joint block fusion
Image-to-image translation aims to change the attributes or domain of an image. Feature-disentanglement methods have recently become popular for this task owing to their feasibility and effectiveness. In such methods, a feature extractor is typically integrated into an encoder-decoder generative adversarial network (GAN) to extract domain features and image features separately. However, the two types of features are not properly combined, resulting in blurry generated images and indistinguishable translated domains. To alleviate this issue, we propose a new feature fusion approach that better exploits feature disentanglement. Instead of adding the two extracted features directly, we design a joint block fusion that combines integration, concatenation, and squeeze operations, allowing the generator to take full advantage of both features and generate more photo-realistic images; a minimal sketch of such a fusion block appears below. We evaluate both classification accuracy and Fréchet Inception Distance (FID) on two benchmark datasets, Alps Seasons and CelebA. Extensive experiments demonstrate that the proposed joint block fusion improves both the discriminability of domains and the quality of translated images. Specifically, classification accuracy improves by 1.04% (with FID reduced by 1.22) on Alps Seasons and by 1.87% (with FID reduced by 4.96) on CelebA.
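To make the fusion idea concrete, here is a minimal PyTorch sketch of a joint fusion block of the kind the abstract describes: rather than adding the domain feature and the image feature element-wise, the block concatenates them, squeezes the channel dimension back down, and integrates the result. The module name `JointBlockFusion`, the channel sizes, the 1x1 squeeze convolution, and the residual integration path are all illustrative assumptions, not the paper's confirmed architecture.

```python
# A hedged sketch of joint block fusion: concatenation + squeeze + integration
# instead of direct feature addition. Exact structure is an assumption.
import torch
import torch.nn as nn


class JointBlockFusion(nn.Module):
    """Fuse a domain feature map with an image (content) feature map."""

    def __init__(self, channels: int):
        super().__init__()
        # Squeeze: reduce the concatenated 2*channels back down to channels.
        self.squeeze = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Integration: a small conv block that mixes the squeezed features.
        self.integrate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, image_feat: torch.Tensor, domain_feat: torch.Tensor) -> torch.Tensor:
        # Concatenate along the channel axis instead of adding element-wise.
        fused = torch.cat([image_feat, domain_feat], dim=1)
        fused = self.squeeze(fused)
        # Residual integration keeps the original image content accessible.
        return image_feat + self.integrate(fused)


if __name__ == "__main__":
    block = JointBlockFusion(channels=256)
    img = torch.randn(1, 256, 64, 64)  # content feature from the encoder
    dom = torch.randn(1, 256, 64, 64)  # domain feature broadcast to a map
    print(block(img, dom).shape)       # torch.Size([1, 256, 64, 64])
```

The residual connection is one plausible way to realize the "integration" step while preserving image content; a generator's decoder could consume the fused map in place of the directly summed features.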