Surrogate Modeling for Efficient Evolutionary Multi-Objective Neural Architecture Search in Super Resolution Image Restoration
Academic Article in Scopus
Overview
Abstract
Fully training each candidate architecture generated during the Neural Architecture Search (NAS) process is computationally expensive. To overcome this issue, surrogate models approximate the performance of a Deep Neural Network (DNN), considerably reducing the computational cost and, thus, democratizing the use of NAS techniques. This paper proposes an XGBoost-based surrogate model to predict the Peak Signal-to-Noise Ratio (PSNR) of DNNs for Super-Resolution Image Restoration (SRIR) tasks. In addition to maximizing PSNR, we also aim to minimize the number of learnable parameters and the total number of floating-point operations. We use the Non-dominated Sorting Genetic Algorithm III (NSGA-III) to tackle this three-objective NAS optimization problem. Our experimental results indicate that NSGA-III using our XGBoost-based surrogate model is significantly faster than full or partial training of the candidate architectures, and some of the selected architectures are comparable in quality to those found with partial training. Consequently, our XGBoost-based surrogate model offers a promising approach to accelerating the automatic design of architectures for SRIR, particularly in resource-constrained environments. © 2024 by SCITEPRESS – Science and Technology Publications, Lda.
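The three objectives described in the abstract (maximize PSNR, minimize learnable parameters, minimize floating-point operations) are compared during NSGA-III selection via Pareto dominance. The following is a minimal, self-contained sketch of that dominance test and the resulting non-dominated filter; the candidate tuples are hypothetical illustrative values, not results from the paper, and this is not the paper's implementation.

```python
# Sketch of the Pareto-dominance test underlying NSGA-style selection for the
# three objectives in the paper: maximize PSNR, minimize parameter count, and
# minimize floating-point operations (FLOPs).

def dominates(a, b):
    """True if architecture a Pareto-dominates architecture b.

    Each architecture is a tuple (psnr, params, flops): higher PSNR is
    better; fewer parameters and fewer FLOPs are better.
    """
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and strictly_better

def non_dominated(archs):
    """Return the architectures not dominated by any other (the Pareto front)."""
    return [a for a in archs
            if not any(dominates(b, a) for b in archs if b != a)]

# Hypothetical candidates: (PSNR in dB, parameters, FLOPs).
candidates = [
    (30.1, 1.2e6, 8.0e9),  # highest PSNR, but heaviest model
    (29.5, 0.6e6, 4.0e9),  # lighter trade-off
    (28.0, 0.9e6, 6.0e9),  # worse than the second on all three objectives
]
front = non_dominated(candidates)  # keeps the first two candidates
```

In the paper's pipeline, the PSNR entry of each tuple would come from the XGBoost surrogate instead of full training, which is what makes evaluating many candidates per generation affordable.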