Published online by Cambridge University Press: 30 June 2025
With rapid advances in robotics and autonomous driving, simultaneous localization and mapping (SLAM) has become a core technology for real-time localization and map building and is now deployed across a wide range of domains. However, SLAM performance often degrades in dynamic environments, where moving objects introduce errors and inconsistencies into both localization and mapping. To overcome these challenges, this paper presents a visual SLAM system that rejects dynamic feature points. The system leverages a lightweight YOLOv7 model to detect dynamic objects and perform semantic segmentation, and it combines optical flow tracking with multiview geometry to identify and eliminate dynamic feature points. This approach effectively mitigates the impact of dynamic objects on the SLAM process while preserving static feature points, enhancing the system's robustness and accuracy in dynamic environments. Finally, we evaluate our method on the TUM RGB-D dataset and in real-world scenarios. The experimental results demonstrate that our approach significantly reduces both the root mean square error (RMSE) and standard deviation (Std) compared with the ORB-SLAM2 algorithm.
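The multiview-geometry test mentioned in the abstract is commonly realized as an epipolar-consistency check: a tracked feature whose match in the current frame lies far from the epipolar line predicted by the static-scene motion is flagged as dynamic. The sketch below illustrates this idea under simplifying assumptions; the function names, the given fundamental matrix `F`, and the 1-pixel threshold are illustrative choices, not the paper's actual implementation:

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance (pixels) from p2 to the epipolar line F @ [p1; 1]
    induced by the previous-frame point p1."""
    l = F @ np.array([p1[0], p1[1], 1.0])           # epipolar line (a, b, c)
    return abs(l @ np.array([p2[0], p2[1], 1.0])) / np.hypot(l[0], l[1])

def split_static_dynamic(F, pts_prev, pts_curr, thresh=1.0):
    """Partition feature matches into static (epipolar-consistent) and
    dynamic (violating the static-scene geometry) sets.

    F        : 3x3 fundamental matrix estimated from static matches
               (e.g. via RANSAC in a real pipeline; assumed given here).
    pts_prev : (x, y) pixel locations in the previous frame.
    pts_curr : optical-flow-tracked (x, y) locations in the current frame.
    thresh   : max point-to-epipolar-line distance (pixels) to count as static.
    """
    static, dynamic = [], []
    for p1, p2 in zip(pts_prev, pts_curr):
        if epipolar_distance(F, p1, p2) <= thresh:
            static.append((p1, p2))
        else:
            dynamic.append((p1, p2))
    return static, dynamic
```

In a full system, points falling inside YOLOv7 segmentation masks of movable objects would be cross-checked with this geometric test, so that static points on a parked car, for example, are not discarded unnecessarily.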