ConViT-Driven Pixel-Level Semantic Segmentation for Multi-Spectral Remote Sensing from a Cybersecurity Perspective: A Comparative Study of Advanced Models


Chunlai Li
Yifan Ru
Guoliang Tang

Abstract

As the cybersecurity landscape grows increasingly complex, the land cover classification results derived from multispectral remote sensing imagery, as critical geospatial data, face security threats such as data leakage and tampering, and their security is crucial for the reliability of environmental monitoring, resource management, and other applications. In this study, the performance of six advanced semantic segmentation models, namely ConViT, FCN, HRNet, LinkNet, U-Net, and DeepLabv3+, in land cover classification of multispectral remote sensing imagery is evaluated from a cybersecurity perspective. The study utilises 7140 samples from the C2Seg-BW dataset for training and 1428 samples for testing, with performance assessed by two key metrics: frequency-weighted intersection over union (FWIoU) and overall accuracy (OA). The results show that the ConViT model is the best performer, demonstrating strong capability in handling the spatial and spectral features of multispectral data. The model's gated positional self-attention mechanism not only improves classification accuracy but also enhances, to a certain extent, its robustness against anomalous data interference, which helps to safeguard the security of land cover classification data in networked environments. This study provides an effective basis for model selection in land cover classification of multispectral remote sensing imagery, and offers a valuable reference for optimising the application of deep learning models to pixel-level land cover mapping from a cybersecurity perspective.
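The abstract reports results in terms of FWIoU and OA but does not reproduce the evaluation code. The sketch below shows the standard definitions of these two metrics computed from a pixel-level confusion matrix; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute overall accuracy (OA) and frequency-weighted IoU (FWIoU)
    from a KxK confusion matrix, where rows are ground-truth classes and
    columns are predicted classes (entries are pixel counts)."""
    conf = np.asarray(conf, dtype=np.float64)
    total = conf.sum()

    # Overall accuracy: correctly classified pixels over all pixels.
    oa = np.diag(conf).sum() / total

    # Per-class IoU: TP / (TP + FP + FN).
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1.0), 0.0)

    # FWIoU: per-class IoU weighted by ground-truth class frequency.
    freq = conf.sum(axis=1) / total
    fwiou = float((freq * iou).sum())

    return float(oa), fwiou


# Example with a toy 3-class confusion matrix (illustrative values only).
if __name__ == "__main__":
    conf = np.array([[50, 2, 3],
                     [4, 40, 1],
                     [2, 3, 45]])
    oa, fwiou = segmentation_metrics(conf)
    print(f"OA = {oa:.4f}, FWIoU = {fwiou:.4f}")
```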

Article Details

Section
ARTICLES
Author Biographies

Yifan Ru

School of Physics and Optoelectronic Engineering, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, Zhejiang, China

Guoliang Tang

Shanghai Institute of Technical Physics, Chinese Academy of Sciences