Color-aware Exposure Correction for Endoscopic Imaging using a Lightweight Vision Transformer — Academic Article in Scopus

abstract

  • Endoscopy is a widely used imaging technique for diagnosing diseases in hollow organs. However, endoscopic images often suffer from limited visibility and can be affected by many imaging artifacts, such as those related to under- or overexposure, which can hamper the performance of AI-based diagnostic tools. Addressing this issue is challenging; thus, most previous work has focused on enhancing underexposed images. In this contribution, we propose an extension to the objective function and deep neural network layers of the IAT (Illumination Adaptive Transformer) model, originally designed for enhancing low-light or ill-exposed images of natural scenes. Our approach specifically targets exposure correction in endoscopic imaging while preserving color and fine-scale details. First, an extra color normalization layer is integrated into the IAT model. Second, the objective function is extended with a Laplacian Pyramid Loss, which evaluates differences across image scales, and a histogram-aware loss (HistoLoss), which preserves the color quality of the enhanced endoscopic image; combined, these improve the output images both quantitatively and qualitatively. We evaluate our method on the Endo4IE dataset and demonstrate significant improvements over Endo-LMSPEC, a previous method tailored specifically to endoscopic imaging. Compared with this state-of-the-art method, our approach achieves an SSIM increase of 2.3% and a PSNR increase of 0.523 dB for overexposed images, along with an SSIM increase of 2.7% and a PSNR improvement of 1.458 dB for underexposed images, all while employing only ∼90k parameters and running at an inference speed of ∼74 FPS, outperforming existing state-of-the-art methods on the same dataset and objective. © 2024 IEEE.
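The two loss terms named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names (`laplacian_pyramid_loss`, `histogram_loss`, `blur`) and the parameters (`levels`, `bins`, the 3×3 binomial kernel, L1 distances) are illustrative assumptions; the actual model uses learned layers and its own kernel and weighting choices.

```python
import numpy as np


def blur(img):
    # Separable 3-tap binomial blur with edge padding (illustrative kernel,
    # not necessarily the one used in the paper).
    k = np.array([0.25, 0.5, 0.25])
    p = np.pad(img, ((1, 1), (0, 0)), mode="edge")
    img = k[0] * p[:-2] + k[1] * p[1:-1] + k[2] * p[2:]
    p = np.pad(img, ((0, 0), (1, 1)), mode="edge")
    return k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]


def laplacian_pyramid_loss(x, y, levels=3):
    # L1 distance between Laplacian pyramid bands of prediction x and
    # reference y, so errors are penalized at several spatial scales.
    loss = 0.0
    for _ in range(levels):
        bx, by = blur(x), blur(y)
        loss += np.abs((x - bx) - (y - by)).mean()  # band-pass residual
        x, y = bx[::2, ::2], by[::2, ::2]           # downsample by 2
    return loss + np.abs(x - y).mean()              # coarsest level


def histogram_loss(x, y, bins=32):
    # L1 distance between normalized intensity histograms, a simple stand-in
    # for a histogram-aware color-preservation term (per channel in practice).
    hx, _ = np.histogram(x, bins=bins, range=(0.0, 1.0), density=True)
    hy, _ = np.histogram(y, bins=bins, range=(0.0, 1.0), density=True)
    return np.abs(hx - hy).mean()
```

In a training loop these two terms would be added, with tuned weights, to the base reconstruction loss; identical images yield zero for both terms, while exposure shifts are picked up by the histogram term even when fine structure is unchanged.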

publication date

  • January 1, 2024