
Record Details

SREF: Semantics-Refined Feature Extraction for Long-Term Visual Localization (EI-indexed)

Document Type: Journal Article

Title (English): SREF: Semantics-Refined Feature Extraction for Long-Term Visual Localization

Authors: Wu, Danfeng[1,2]; Zhu, Kaifeng[1,2]; Shi, Heng[3]; Zhou, Fenfen[1,2]; Kuang, Minchi[3]

First Author: Wu, Danfeng

Affiliations: [1] Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, 100101, China; [2] College of Robotics, Beijing Union University, Beijing, 100101, China; [3] Department of Precision Instrument, Tsinghua University, Beijing, 100084, China

First Affiliation: Beijing Key Laboratory of Information Service Engineering, Beijing Union University

Corresponding Affiliation: [3] Department of Precision Instrument, Tsinghua University, Beijing, 100084, China

Year: 2026

Volume: 12

Issue: 2

Journal: Journal of Imaging

Indexing: EI (Accession No.: 20260920179390)

Language: English

Keywords: Air navigation - Deep learning - Extraction - Robotics - Semantics - Visualization

Abstract: Accurate and robust visual localization under changing environments remains a fundamental challenge in autonomous driving and mobile robotics. Traditional handcrafted features often degrade under long-term illumination and viewpoint variations, while recent CNN-based methods, although more robust, typically rely on coarse semantic cues and remain vulnerable to dynamic objects. In this paper, we propose a fine-grained semantics-guided feature extraction framework that adaptively selects stable keypoints while suppressing dynamic disturbances. A fine-grained semantic refinement module subdivides coarse semantic categories into stability-homogeneous sub-classes, and a dual-attention mechanism enhances local repeatability and semantic consistency. By integrating physical priors with self-supervised clustering, the proposed framework learns discriminative and reliable feature representations. Extensive experiments on the Aachen and RobotCar-Seasons benchmarks demonstrate that the proposed approach achieves state-of-the-art accuracy and robustness while maintaining real-time efficiency, effectively bridging coarse semantic guidance with fine-grained stability estimation. Quantitatively, our method achieves strong localization performance on Aachen (up to 88.1% at night under the (Formula presented.) threshold) and on RobotCar-Seasons (up to 57.2%/28.4% under the same threshold for day/night), demonstrating improved robustness to seasonal and illumination changes. © 2026 by the authors.

