Abstract:
To address the challenges of diverse morphology, blurred boundaries, and frequent occlusion of roadside traffic objects in complex urban environments, which collectively lead to unstable point cloud perception accuracy, this study proposes a novel method for the precise perception of roadside traffic objects. By introducing boundary constraints into the point cloud feature learning process, the proposed method effectively mitigates interference among different object categories. In addition, a multi-scale feature fusion strategy jointly captures local details and global structural information, thereby enhancing recognition performance in complex road scenarios. Experiments are conducted on three datasets collected from different urban scenes using vehicle-mounted LiDAR systems to validate the effectiveness of the proposed approach. The results demonstrate that the proposed model achieves a mean Intersection over Union (mIoU) of 89.48% and a roadside traffic object IoU of 87.91% across the three datasets. Furthermore, the method attains IoU scores of 94.90%, 93.36%, 90.14%, and 88.56% for roadside traffic objects on expressways, arterial roads, secondary trunk roads, and branch roads, respectively. Overall, the proposed method consistently improves the recognition accuracy of roadside traffic objects across diverse road environments, exhibiting strong robustness and generalization capability, particularly in densely populated and heavily occluded urban street scenes. The findings of this study provide a reliable data foundation for advancing the digital transformation of transportation infrastructure.