UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase

Publication, Conference
Liu, Y; Chen, R; Li, X; Kong, L; Yang, Y; Xia, Z; Bai, Y; Zhu, X; Ma, Y; Li, Y; Qiao, Y; Hou, Y
Published in: Proceedings of the IEEE International Conference on Computer Vision
January 1, 2023

Point-, voxel-, and range-views are three representative forms of point clouds. All of them provide accurate 3D measurements but lack color and texture information. RGB images are a natural complement to these point cloud views, and fully exploiting the comprehensive information of both modalities enables more robust perception. In this paper, we present a unified multi-modal LiDAR segmentation network, termed UniSeg, which leverages the information of RGB images and three views of the point cloud, and performs semantic segmentation and panoptic segmentation simultaneously. Specifically, we first design the Learnable cross-Modal Association (LMA) module to automatically fuse voxel-view and range-view features with image features; it fully utilizes the rich semantic information of images and is robust to calibration errors. Then, the enhanced voxel-view and range-view features are transformed to the point space, where the three views of point cloud features are further fused adaptively by the Learnable cross-View Association (LVA) module. Notably, UniSeg achieves promising results on three public benchmarks, i.e., SemanticKITTI, nuScenes, and the Waymo Open Dataset (WOD); it ranks 1st in two challenges, namely the LiDAR semantic segmentation challenge of nuScenes and the panoptic segmentation challenge of SemanticKITTI. Besides, we construct the OpenPCSeg codebase, which is the largest and most comprehensive outdoor LiDAR segmentation codebase. It contains most of the popular outdoor LiDAR segmentation algorithms and provides reproducible implementations. The OpenPCSeg codebase will be made publicly available at https://github.com/PJLab-ADG/PCSeg.
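The adaptive cross-view fusion described above can be pictured as a learned, per-point gating over the three views: a small network scores each view's feature vector, the scores are normalized with a softmax, and the fused feature is the weighted sum. A minimal plain-Python sketch under that reading (the actual LVA module uses learned neural layers; `weight_net` here is a hypothetical stand-in with fixed random weights, not the authors' implementation):

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_views(point_feat, voxel_feat, range_feat, weight_net):
    """Adaptively fuse three per-point feature vectors.

    weight_net maps the concatenated features to one score per view,
    standing in for the learned layers of a cross-view gating module.
    """
    concat = point_feat + voxel_feat + range_feat
    scores = weight_net(concat)            # one score per view
    w = softmax(scores)                    # normalized view weights
    return [w[0] * p + w[1] * v + w[2] * r
            for p, v, r in zip(point_feat, voxel_feat, range_feat)]

# Toy "network": a fixed random linear map from 3*dim features to 3 scores.
random.seed(0)
dim = 4
W = [[random.uniform(-1, 1) for _ in range(3 * dim)] for _ in range(3)]
weight_net = lambda x: [sum(wi * xi for wi, xi in zip(row, x)) for row in W]

p = [1.0, 0.0, 0.5, 0.2]   # point-view feature
v = [0.3, 0.7, 0.1, 0.9]   # voxel-view feature
r = [0.5, 0.5, 0.5, 0.5]   # range-view feature
fused = fuse_views(p, v, r, weight_net)
```

Because the softmax weights sum to one, the fused feature is a convex combination of the three view features in every dimension, which keeps the fusion stable regardless of the learned scores.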


Published In

Proceedings of the IEEE International Conference on Computer Vision

DOI

10.1109/ICCV51070.2023.01980

ISSN

1550-5499

Publication Date

January 1, 2023

Start / End Page

21605 / 21616

Citation

APA:
Liu, Y., Chen, R., Li, X., Kong, L., Yang, Y., Xia, Z., … Hou, Y. (2023). UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase. In Proceedings of the IEEE International Conference on Computer Vision (pp. 21605–21616). https://doi.org/10.1109/ICCV51070.2023.01980

Chicago:
Liu, Y., R. Chen, X. Li, L. Kong, Y. Yang, Z. Xia, Y. Bai, et al. “UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase.” In Proceedings of the IEEE International Conference on Computer Vision, 21605–16, 2023. https://doi.org/10.1109/ICCV51070.2023.01980.

ICMJE:
Liu Y, Chen R, Li X, Kong L, Yang Y, Xia Z, et al. UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase. In: Proceedings of the IEEE International Conference on Computer Vision. 2023. p. 21605–16.

MLA:
Liu, Y., et al. “UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase.” Proceedings of the IEEE International Conference on Computer Vision, 2023, pp. 21605–16. Scopus, doi:10.1109/ICCV51070.2023.01980.

NLM:
Liu Y, Chen R, Li X, Kong L, Yang Y, Xia Z, Bai Y, Zhu X, Ma Y, Li Y, Qiao Y, Hou Y. UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase. Proceedings of the IEEE International Conference on Computer Vision. 2023. p. 21605–21616.
