Efficient Vehicular Collaborative Perception Based on Spatial-Temporal Feature Compression
In collaborative perception, autonomous vehicles with limited individual perception capability communicate with one another to achieve a more holistic and effective perception result. Real-world communication systems, however, are constrained in wireless resources and reliability and cannot sustain the enormous real-time data transmission that delay-sensitive collaborative perception requires. To resolve this issue, we propose an efficient vehicular Collaborative Perception method based on Spatial-temporal feature Compression (CPSC) that exploits the trade-off between perception performance and bandwidth consumption. CPSC performs feature-level compression by focusing on the critical regions of the perceptual information in the spatial-temporal domain and adapts the transmitted traffic to network conditions. To the best of our knowledge, this is the first work to exploit temporal feature redundancy to improve the efficiency of collaborative perception. To thoroughly evaluate CPSC, we conduct extensive experiments on collaborative 3D object detection using the real-world DAIR-V2X dataset and the simulated OPV2V dataset. The results show that CPSC outperforms state-of-the-art (SOTA) collaborative perception methods by 2.91% in AP@0.7 on OPV2V and by 1.71% in AP@0.5 on DAIR-V2X. Meanwhile, CPSC reduces communication volume by more than a factor of ten while consistently outperforming the previous SOTA method.
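The abstract does not detail how the spatial-temporal compression is realized. Purely as an illustration, the sketch below shows one way such a selection step could look, assuming a bird's-eye-view (BEV) feature map, a spatial confidence map marking critical regions, and a per-location change threshold for suppressing temporally redundant features; all function names, parameters, and thresholds here are hypothetical and not taken from the paper.

```python
import numpy as np

def select_features_to_send(feat, conf, prev_feat, budget_ratio=0.1, change_thresh=0.05):
    """Hypothetical sketch of spatial-temporal feature selection for collaborative perception.

    feat:          (C, H, W) current BEV feature map of the sending agent.
    conf:          (H, W)    spatial confidence map marking perceptually critical regions.
    prev_feat:     (C, H, W) features already transmitted for the previous frame, or None.
    budget_ratio:  fraction of spatial locations the current bandwidth budget allows.
    change_thresh: minimum per-location feature change required for re-transmission.
    Returns the flat indices of the selected locations and their feature columns.
    """
    C, H, W = feat.shape
    scores = conf.reshape(-1).copy()

    # Temporal redundancy: suppress locations whose features barely changed
    # relative to what was already sent for the previous frame.
    if prev_feat is not None:
        change = np.linalg.norm((feat - prev_feat).reshape(C, -1), axis=0)
        scores[change < change_thresh] = 0.0

    # Spatial sparsification: keep only the top-k critical locations,
    # where k is scaled by the available bandwidth budget.
    k = max(1, int(budget_ratio * H * W))
    idx = np.argsort(scores)[::-1][:k]
    idx = idx[scores[idx] > 0.0]          # drop locations with zero score
    return idx, feat.reshape(C, -1)[:, idx]


# Toy usage: a 64-channel 100x100 BEV feature map with a random confidence map.
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 100, 100)).astype(np.float32)
conf = rng.random((100, 100)).astype(np.float32)
prev = feat + 0.01 * rng.standard_normal(feat.shape).astype(np.float32)
idx, payload = select_features_to_send(feat, conf, prev, budget_ratio=0.05)
print(payload.shape)   # (64, <=500): only these feature columns are transmitted
```

Tightening `budget_ratio` when the channel degrades, or raising `change_thresh` for slowly changing scenes, would trade detection quality for bandwidth in the spirit of the adaptation the abstract describes.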
Related Subject Headings
- Automobile Design & Engineering
- 46 Information and computing sciences
- 40 Engineering
- 10 Technology
- 09 Engineering
- 08 Information and Computing Sciences