This project presents an approach, based on the research paper https://www.mdpi.com/2079-9292/10/15/1823, to improving object classification in side-scan sonar images, which is particularly relevant for underwater rescue missions and the detection of underwater targets such as shipwrecks and aircraft wreckage. The main challenge is the scarcity of suitable datasets and the imbalanced class distribution of real side-scan sonar data. To overcome these obstacles, the project leverages synthetic data and transfer learning. It starts from optical images and uses a style transfer network to generate "simulated side-scan sonar images" that mimic the appearance of real sonar data. These simulated images are then used to fine-tune a convolutional neural network (CNN) pre-trained on the large ImageNet dataset of optical images. Experimental results show that this approach achieves a maximum classification accuracy of 97.32%, reducing operators' workload and mitigating subjective errors caused by visual fatigue.
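The fine-tuning step above can be illustrated in miniature. The sketch below is not the paper's code: it uses NumPy, random Gaussian clusters as a hypothetical stand-in for feature vectors produced by a frozen ImageNet-pretrained CNN backbone, and trains only a new linear classification head on top of them, which is the core idea of transfer learning with frozen features. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for "frozen pre-trained features": in the real
# pipeline these would come from a CNN pre-trained on ImageNet applied to
# simulated side-scan sonar images; here we draw two Gaussian clusters to
# represent feature vectors of two target classes.
n, d = 200, 16
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Transfer-learning head: train only a linear classifier (logistic
# regression via gradient descent) while the feature extractor stays fixed.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / n           # gradient step on weights
    b -= lr * np.mean(p - y)                # gradient step on bias

acc = np.mean(((X @ w + b) > 0).astype(int) == y)
print(f"training accuracy on toy features: {acc:.2f}")
```

In the actual project, the frozen backbone and the new head would both be part of one CNN (with later layers unfrozen for fine-tuning); this toy version only shows why reusing pre-trained features lets a small, imbalanced sonar dataset suffice for training.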
In summary, this project contributes to side-scan sonar image classification by combining synthetic data generation with transfer learning. Fine-tuning a pre-trained CNN on "simulated side-scan sonar images" substantially improves classification accuracy, which in turn improves the efficiency and reliability of underwater rescue missions. This combination of techniques offers a practical tool for automating object classification in side-scan sonar imagery, bringing tangible benefits to the maritime and underwater search and rescue communities.