Details
Advanced Image Steganography Using a U-Net-Based Architecture with Multi-Scale Fusion and Perceptual Loss (indexed in SCI-EXPANDED)
Document type: Journal article
English title: Advanced Image Steganography Using a U-Net-Based Architecture with Multi-Scale Fusion and Perceptual Loss
Authors: Zeng, Lu[1,2]; Yang, Ning[1,2]; Li, Xiang[1,2]; Chen, Aidong[2,3]; Jing, Hongyuan[2,3]; Zhang, Jiancheng[2]
First author: Zeng, Lu
Corresponding author: Chen, AD[1]; Chen, AD[2]
Affiliations: [1]Beijing Union Univ, Beijing Key Lab Informat Serv Engn, Beijing 100101, Peoples R China; [2]Beijing Union Univ, Coll Robot, Beijing 100101, Peoples R China; [3]Res Ctr Multiintelligent Syst, Beijing 100101, Peoples R China
First affiliation: Beijing Key Laboratory of Information Service Engineering, Beijing Union University
Corresponding affiliations: [1](corresponding author) Beijing Union Univ, Coll Robot, Beijing 100101, Peoples R China; [2](corresponding author) Res Ctr Multiintelligent Syst, Beijing 100101, Peoples R China
Year: 2023
Volume: 12
Issue: 18
Journal: ELECTRONICS
Indexed in: Scopus (Accession No. 2-s2.0-85172915793); WOS: SCI-EXPANDED (Accession No. WOS:001071737400001)
Funding/acknowledgements: The authors wish to thank the editors and reviewers for their valuable comments and helpful suggestions, which have greatly improved the paper's quality.
Language: English
Keywords: image steganography; generative adversarial network; perceptual path length; multi-scale fusion
Abstract: Image-to-image steganography refers to the practice of hiding a secret image within a cover image, and it serves as a crucial technique for secure communication and data protection. Existing image-to-image steganographic methods based on generative adversarial networks achieve a high embedding capacity, but there is still significant room for improvement in the quality of the stego images and of the extracted secret images. In this study, we propose an architecture that inconspicuously hides an image within the Y channel of another image, leveraging a U-Net network and a multi-scale fusion ExtractionBlock. The network is jointly trained with a loss function that combines Perceptual Path Length (PPL) and Mean Square Error (MSE). The proposed network is trained and tested on two datasets, Labeled Faces in the Wild and Pascal Visual Object Classes. Experimental results demonstrate that the model not only achieves high invisibility and a significant hiding capacity (8 bits per pixel) without altering the color information of the cover image but also exhibits strong generalization ability. Additionally, we introduce the Modified Multi-Image Similarity Metric (MMISM), which integrates the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM) values of images, to comprehensively evaluate the network's hiding and extraction capabilities.
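Illustration: the abstract's central idea, embedding the secret image only in the Y (luma) channel of the cover so that the chroma channels, and hence the cover's color information, stay untouched, can be sketched minimally in PyTorch. The color-space conversion below uses the standard ITU-R BT.601 formulas; `HidingNet` and the `embed` helper are hypothetical placeholders standing in for the paper's U-Net hiding network, not the authors' implementation.

```python
import torch
import torch.nn as nn


def rgb_to_ycbcr(rgb: torch.Tensor) -> torch.Tensor:
    # (N, 3, H, W) RGB in [0, 1] -> YCbCr, ITU-R BT.601 with chroma offset 0.5.
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.cat([y, cb, cr], dim=1)


def ycbcr_to_rgb(ycc: torch.Tensor) -> torch.Tensor:
    # Inverse BT.601 conversion back to RGB in [0, 1].
    y, cb, cr = ycc[:, 0:1], ycc[:, 1:2] - 0.5, ycc[:, 2:3] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return torch.cat([r, g, b], dim=1).clamp(0.0, 1.0)


class HidingNet(nn.Module):
    # Hypothetical stand-in for the paper's U-Net hiding network.
    def __init__(self):
        super().__init__()
        # Input: cover Y channel + grayscale secret (2 channels); output: stego Y.
        self.body = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, cover_y: torch.Tensor, secret: torch.Tensor) -> torch.Tensor:
        return self.body(torch.cat([cover_y, secret], dim=1))


def embed(cover_rgb: torch.Tensor, secret_gray: torch.Tensor, net: HidingNet) -> torch.Tensor:
    # Hide the secret in the Y channel only; Cb/Cr (color) are copied unchanged.
    ycc = rgb_to_ycbcr(cover_rgb)
    stego_y = net(ycc[:, 0:1], secret_gray)
    stego_ycc = torch.cat([stego_y, ycc[:, 1:3]], dim=1)
    return ycbcr_to_rgb(stego_ycc)


if __name__ == "__main__":
    cover = torch.rand(1, 3, 64, 64)   # toy cover image
    secret = torch.rand(1, 1, 64, 64)  # toy grayscale secret
    stego = embed(cover, secret, HidingNet())
    print(stego.shape)  # torch.Size([1, 3, 64, 64])
```

In the paper, such a hiding network is trained jointly with an extraction network under a loss combining MSE and Perceptual Path Length; the sketch omits both and only shows why restricting the embedding to the luma channel leaves the cover's color information unaltered.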