# 4D Radar: A Novel Sensing Paradigm for 3D Object Detection
> If you want to add anything to this repository, please create a PR at github.com/liuzengyun/Awesome-3D-Detection-with-4D-Radar or email i@cvzoo.cn.

## Overview

- [Datasets](#datasets)
- [SOTA Papers](#sota-papers)
  - [From 4D Radar Point Cloud](#from-4d-radar-point-cloud)
  - [From 4D Radar Tensor](#from-4d-radar-tensor)
  - [Fusion of 4D Radar & LiDAR](#fusion-of-4d-radar--lidar)
  - [Fusion of 4D Radar & RGB Camera](#fusion-of-4d-radar--rgb-camera)
  - [Others](#others)
- [Survey Papers](#survey-papers)
- [Basic Knowledge](#basic-knowledge)
- [Representative Researchers](#representative-researchers)

## Datasets

| Dataset | Sensors | Radar Data | Source | Annotations | URL | Other |
|---|---|---|---|---|---|---|
| Astyx | 4D Radar, LiDAR, Camera | PC | 19'EuRAD | 3D bbox | github, paper | ~500 frames |
| RADIal | 4D Radar, LiDAR, Camera | PC, ADC, RT | 22'CVPR | 2D bbox, seg | github, paper | 8,252 labeled frames |
| View-of-Delft (VoD) | 4D Radar, LiDAR, Stereo Camera | PC | 22'RA-L | 3D bbox | website | 8,693 frames |
| TJ4DRadSet | 4D Radar, LiDAR, Camera, GNSS | PC | 22'ITSC | 3D bbox, TrackID | github, paper | 7,757 frames |
| K-Radar | 4D Radar, LiDAR, Stereo Camera, RTK-GPS | RT | 22'NeurIPS | 3D bbox, TrackID | github, paper | 35K frames; 360° camera |
| Dual Radar | dual 4D Radars, LiDAR, Camera | PC | 23'arXiv | 3D bbox, TrackID | paper | 10K frames |
| L-RadSet | 4D Radar, LiDAR, 3 Cameras | PC | 24'TIV | 3D bbox, TrackID | github, paper | 11.2K frames; annotations out to 220 m |
| ZJUODset | 4D Radar, LiDAR, Camera | PC | 23'ICVISP | 3D bbox, 2D bbox | github, paper | 19,000 raw frames, 3,800 annotated frames |
| CMD | 32-beam LiDAR, 128-beam LiDAR, solid-state LiDAR, 4D Radar, 3 Cameras | PC | 24'ECCV | 3D bbox | github, paper | 50 high-quality sequences of 20 s each (200 frames per sensor) |
| V2X-R | 4D Radar, LiDAR, Camera (simulated) | PC | 24'arXiv | 3D bbox | github, paper | 12,079 scenarios; 37,727 LiDAR and 4D radar point cloud frames; 150,908 images |
| OmniHD-Scenes | 6 4D Radars, LiDAR, 6 Cameras, IMU | PC | 24'arXiv | 3D bbox, TrackID, OCC | website, paper | more than 450K synchronized frames |
| … | … | … | … | … | … | … |

## SOTA Papers

### From 4D Radar Point Cloud

- **RPFA-Net: A 4D RaDAR Pillar Feature Attention Network for 3D Object Detection** (21'ITSC)
  - Link: paper, code
  - Affiliation: Tsinghua University (Xinyu Zhang)
  - Dataset: Astyx
- **Multi-class road user detection with 3+1D radar in the View-of-Delft dataset** (22'RA-L)
  - Link: paper
  - Dataset: VoD
  - Note: baseline of VoD
- **SMURF: Spatial multi-representation fusion for 3D object detection with 4D imaging radar** (23'TIV)
  - Link: paper
  - Affiliation: Beihang University (Bing Zhu)
  - Dataset: VoD, TJ4DRadSet
- **PillarDAN: Pillar-based Dual Attention Network for 3D Object Detection with 4D RaDAR** (23'ITSC)
  - Link: paper
  - Affiliation: Shanghai Jiao Tong University (Lin Yang)
  - Dataset: Astyx
- **MVFAN: Multi-view Feature Assisted Network for 4D Radar Object Detection** (23'ICONIP)
  - Link: paper
  - Affiliation: Nanyang Technological University
  - Dataset: Astyx, VoD
- **SMIFormer: Learning Spatial Feature Representation for 3D Object Detection from 4D Imaging Radar via Multi-View Interactive Transformers** (23'Sensors)
  - Link: paper
  - Affiliation: Tongji University
  - Dataset: VoD
- **3-D Object Detection for Multiframe 4-D Automotive Millimeter-Wave Radar Point Cloud** (23'IEEE Sensors Journal)
  - Link: paper
  - Affiliation: Tongji University (Zhixiong Ma)
  - Dataset: TJ4DRadSet
- **RMSA-Net: A 4D Radar Based Multi-Scale Attention Network for 3D Object Detection** (23'ISCSIC)
  - Link: paper
  - Affiliation: Nanjing University of Aeronautics and Astronautics (Jie Hao)
  - Dataset: HR4D (self-collected, not open source)
- **RadarPillars: Efficient Object Detection from 4D Radar Point Clouds** (24'arXiv)
  - Link: paper
  - Affiliation: Mannheim University of Applied Sciences, Germany
  - Dataset: VoD
- **VA-Net: 3D Object Detection with 4D Radar Based on Self-Attention** (24'CVDL)
  - Link: paper
  - Affiliation: Hunan Normal University (Bo Yang)
  - Dataset: VoD
- **RTNH+: Enhanced 4D Radar Object Detection Network using Two-Level Preprocessing and Vertical Encoding** (24'TIV)
  - Link: code, paper
  - Affiliation: KAIST (Seung-Hyun Kong)
  - Dataset: K-Radar
  - Note: the enhanced baseline of K-Radar
- **RaTrack: Moving Object Detection and Tracking with 4D Radar Point Cloud** (24'ICRA)
  - Link: code
  - Affiliation: Royal College of Art, University College London (Chris Xiaoxuan Lu)
  - Dataset: VoD
- **Feature Fusion and Interaction Network for 3D Object Detection based on 4D Millimeter Wave Radars** (24'CCC)
  - Link: paper
  - Affiliation: University of Science and Technology of China (Qiang Ling)
  - Dataset: VoD
- **Sparsity-Robust Feature Fusion for Vulnerable Road-User Detection with 4D Radar** (24'Applied Sciences)
  - Link: paper
  - Affiliation: Mannheim University of Applied Sciences (Oliver Wasenmüller)
  - Dataset: VoD
- **Enhanced 3D Object Detection using 4D Radar and Vision Fusion with Segmentation Assistance** (24'preprint)
  - Link: paper, code
  - Affiliation: Beijing Institute of Technology (Xuemei Chen)
  - Dataset: VoD
- **RadarPillarDet: Multi-Pillar Feature Fusion with 4D Millimeter-Wave Radar for 3D Object Detection** (24'SAE Technical Paper)
  - Link: paper
  - Affiliation: Tongji University (Zhixiong Ma)
  - Dataset: VoD
- **MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection** (24'ICANN)
  - Link: paper
  - Affiliation: Technical University of Munich (Xiangyuan Peng)
  - Dataset: VoD, TJ4DRadSet
- **Multi-Scale Pillars Fusion for 4D Radar Object Detection with Radar Data Enhancement** (24'IEEE Sensors Journal)
  - Link: paper
  - Affiliation: Chinese Academy of Sciences (Zhe Zhang)
  - Dataset: VoD
- **SCKD: Semi-Supervised Cross-Modality Knowledge Distillation for 4D Radar Object Detection** (25'AAAI)
  - Link: paper, code (unfilled project)
  - Affiliation: Zhejiang University (Zhiyu Xiang)
  - Dataset: VoD, ZJUODset
  - Note: the teacher is a LiDAR-radar bi-modality fusion network, while the student is a radar-only network. Through effective knowledge distillation from the teacher, the student learns to extract sophisticated features from the radar input and boosts its detection performance (see the sketch after this list).
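The SCKD entry above hinges on cross-modality teacher-student distillation. The toy PyTorch sketch below shows only the general feature-mimicking idea, with a frozen fusion teacher guiding a radar-only student; the module definitions and the plain MSE distillation loss are illustrative assumptions, not the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real networks (hypothetical, for illustration only):
# the teacher fuses LiDAR and radar BEV features, the student sees radar only.
class FusionTeacher(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.net = nn.Conv2d(2 * c, c, kernel_size=3, padding=1)

    def forward(self, lidar_bev, radar_bev):
        return self.net(torch.cat([lidar_bev, radar_bev], dim=1))

class RadarStudent(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.net = nn.Conv2d(c, c, kernel_size=3, padding=1)

    def forward(self, radar_bev):
        return self.net(radar_bev)

teacher, student = FusionTeacher().eval(), RadarStudent()
mse = nn.MSELoss()

lidar_bev = torch.randn(1, 32, 128, 128)  # dummy BEV feature maps
radar_bev = torch.randn(1, 32, 128, 128)

with torch.no_grad():                     # the teacher is frozen
    t_feat = teacher(lidar_bev, radar_bev)
s_feat = student(radar_bev)

# Feature-mimicking distillation term; the full method adds the usual
# detection losses and its own semi-supervised machinery on top.
distill_loss = mse(s_feat, t_feat)
distill_loss.backward()
```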
### From 4D Radar Tensor

- **Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions** (24'CVPR)
  - Link: paper, code
  - Affiliation: KAIST (Yujeong Chae)
  - Dataset: K-Radar
  - Note: this method takes the LiDAR point cloud, the 4D radar tensor (not a point cloud), and the image as input.
- **CenterRadarNet: Joint 3D Object Detection and Tracking Framework using 4D FMCW Radar** (24'ICIP)
  - Link: paper
  - Affiliation: University of Washington (Jen-Hao Cheng)
  - Dataset: K-Radar

### Fusion of 4D Radar & LiDAR

- **InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection** (22'IROS)
  - Link: paper
  - Affiliation: Tsinghua University (Li Wang)
  - Dataset: Astyx
- **Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving** (23'TVT)
  - Link: paper
  - Affiliation: Tsinghua University (Li Wang)
  - Dataset: Astyx
- **L4DR: LiDAR-4DRadar Fusion for Weather-Robust 3D Object Detection** (24'arXiv)
  - Link: paper
  - Affiliation: Xiamen University
  - Dataset: VoD, K-Radar
  - Note: for the K-Radar dataset, the authors preprocess the 4D radar sparse tensor by keeping only the top 10,240 points with the highest power measurements (see the sketch after this list). The paper was submitted to 25'AAAI.
- **Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation** (24'ICRA)
  - Link: paper, code
  - Affiliation: University of Edinburgh, University College London (Chris Xiaoxuan Lu)
  - Dataset: VoD
- **Traffic Object Detection for Autonomous Driving Fusing LiDAR and Pseudo 4D-Radar Under Bird's-Eye-View** (24'TITS)
  - Link: paper
  - Affiliation: Xi'an Jiaotong University (Yonghong Song)
  - Dataset: Astyx
- **Fusing LiDAR and Radar with Pillars Attention for 3D Object Detection** (24'International Symposium on Autonomous Systems (ISAS))
  - Link: paper
  - Affiliation: Zhejiang University (Liang Liu)
  - Dataset: VoD
- **RLNet: Adaptive Fusion of 4D Radar and Lidar for 3D Object Detection** (24'ECCVW)
  - Link: paper and reviews
  - Affiliation: Zhejiang University (Zhiyu Xiang)
  - Dataset: ZJUODset
- **LEROjD: Lidar Extended Radar-Only Object Detection** (24'ECCV)
  - Link: paper, code
  - Affiliation: TU Dortmund University (Patrick Palmer, Martin Krüger)
  - Dataset: VoD
  - Note: "Although lidar should not be used during inference, it can aid the training of radar-only object detectors."
- **V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection with Denoising Diffusion** (24'arXiv)
  - Link: paper, code
  - Affiliation: Xiamen University (Chenglu Wen)
  - Dataset: V2X-R
  - Note: baseline method of V2X-R
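As referenced in the L4DR note, keeping only the strongest returns is a simple way to thin a sparse radar tensor into a point set. Below is a minimal NumPy sketch under the assumption that power sits in the last column of each point row; the real K-Radar layout and the authors' preprocessing code may differ.

```python
import numpy as np

def topk_by_power(radar_points: np.ndarray, k: int = 10240) -> np.ndarray:
    """Keep the k points with the highest power measurement.

    Assumes each row is (x, y, z, ..., power) with power in the last
    column; the actual K-Radar sparse-tensor layout may differ.
    """
    if len(radar_points) <= k:
        return radar_points
    # argpartition gives the top-k indices in O(n), order not preserved.
    idx = np.argpartition(radar_points[:, -1], -k)[-k:]
    return radar_points[idx]

# Example: 50,000 raw measurements reduced to the 10,240 strongest returns.
pts = np.random.rand(50000, 5).astype(np.float32)
print(topk_by_power(pts).shape)  # (10240, 5)
```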
### Fusion of 4D Radar & RGB Camera

- **RCFusion: Fusing 4-D Radar and Camera With Bird's-Eye View Features for 3-D Object Detection** (23'TIM)
  - Link: paper
  - Affiliation: Tongji University (Zhixiong Ma)
  - Dataset: VoD, TJ4DRadSet
- **GRC-Net: Fusing GAT-Based 4D Radar and Camera for 3D Object Detection** (23'SAE Technical Paper)
  - Link: paper
  - Affiliation: Beijing Institute of Technology (Lili Fan)
  - Dataset: VoD
- **LXL: LiDAR Excluded Lean 3D Object Detection With 4D Imaging Radar and Camera Fusion** (24'TIV)
  - Link: paper
  - Affiliation: Beihang University (Bing Zhu)
  - Dataset: VoD, TJ4DRadSet
- **TL-4DRCF: A Two-Level 4-D Radar-Camera Fusion Method for Object Detection in Adverse Weather** (24'IEEE Sensors Journal)
  - Link: paper
  - Affiliation: South China University of Technology (Kai Wu)
  - Dataset: VoD
  - Note: beyond VoD, the LiDAR point clouds and images of the VoD dataset are processed with artificial fog to obtain a VoD-Fog dataset for validating the model.
- **UniBEVFusion: Unified Radar-Vision BEVFusion for 3D Object Detection** (24'arXiv)
  - Link: paper
  - Affiliation: Xi'an Jiaotong-Liverpool University
  - Dataset: VoD, TJ4DRadSet
- **RCBEVDet: Radar-camera Fusion in Bird's Eye View for 3D Object Detection** (24'CVPR)
  - Link: paper
  - Affiliation: Peking University (Yongtao Wang)
  - Dataset: VoD
  - Note: handles not only 4D mmWave radar but also 3D radar, as in nuScenes
- **MSSF: A 4D Radar and Camera Fusion Framework With Multi-Stage Sampling for 3D Object Detection in Autonomous Driving** (24'arXiv)
  - Link: paper
  - Affiliation: University of Science and Technology of China (Jun Liu)
  - Dataset: VoD, TJ4DRadSet
- **SGDet3D: Semantics and Geometry Fusion for 3D Object Detection Using 4D Radar and Camera** (24'RA-L)
  - Link: paper, code
  - Affiliation: Zhejiang University (Huiliang Shen)
  - Dataset: VoD, TJ4DRadSet
- **ERC-Fusion: Fusing Enhanced 4D Radar and Camera for 3D Object Detection** (24'DTPI)
  - Link: paper
  - Affiliation: Beijing Institute of Technology (Lili Fan)
  - Dataset: VoD
- **HGSFusion: Radar-Camera Fusion with Hybrid Generation and Synchronization for 3D Object Detection** (25'AAAI)
  - Link: paper, code
  - Affiliation: Southeast University (Yan Huang)
  - Dataset: VoD, TJ4DRadSet
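A recurring ingredient in the radar-camera methods above is associating radar points with image pixels through the camera's extrinsics and intrinsics. The generic pinhole-projection sketch below illustrates only that step; it is not any listed paper's fusion module, and the matrices in the example are made up (a z-forward camera frame is assumed).

```python
import numpy as np

def project_radar_to_image(pts_radar: np.ndarray,
                           T_cam_from_radar: np.ndarray,
                           K: np.ndarray) -> np.ndarray:
    """Project radar points (N, 3) to pixel coordinates (M, 2).

    T_cam_from_radar: 4x4 extrinsic transform, K: 3x3 camera intrinsics.
    """
    pts_h = np.hstack([pts_radar, np.ones((len(pts_radar), 1))])  # homogeneous
    pts_cam = (T_cam_from_radar @ pts_h.T)[:3]                    # camera frame
    front = pts_cam[2] > 0                # drop points behind the camera
    uv = K @ pts_cam[:, front]
    return (uv[:2] / uv[2]).T             # perspective divide -> (u, v)

# Example with identity extrinsics and a simple pinhole intrinsic matrix.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
T = np.eye(4)
pts = np.array([[2.0, 0.5, 10.0]])        # one point 10 m ahead of the camera
print(project_radar_to_image(pts, T, K))  # -> [[420. 265.]]
```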
### Others

- **LiDAR-based All-weather 3D Object Detection via Prompting and Distilling 4D Radar** (24'ECCV)
  - Link: paper, code (unfilled project)
  - Affiliation: KAIST (Yujeong Chae)
  - Dataset: K-Radar
- **Exploring Domain Shift on Radar-Based 3D Object Detection Amidst Diverse Environmental Conditions** (24'ITSC)
  - Link: paper
  - Affiliation: Robert Bosch GmbH (Miao Zhang)
  - Dataset: K-Radar, Bosch-Radar

## Survey Papers

- **4D Millimeter-Wave Radar in Autonomous Driving: A Survey** (23'arXiv)
  - Link: paper
  - Affiliation: Tsinghua University (Jianqiang Wang)
- **4D mmWave Radar for Autonomous Driving Perception: A Comprehensive Survey** (24'TIV)
  - Link: paper
  - Affiliation: Beijing Institute of Technology (Lili Fan)
- **A Survey of Deep Learning Based Radar and Vision Fusion for 3D Object Detection in Autonomous Driving** (24'arXiv)

waiting for updates…

## Basic Knowledge

### What is 4D Radar?

3D object detection recovers the position, size, and orientation of objects in 3D space and is widely used in autonomous driving perception, robot manipulation, and other applications. Sensors such as LiDAR, RGB cameras, and depth cameras are commonly used for this task. In recent years, several works have employed 4D radar as a primary or secondary sensor for 3D object detection.

4D radar is also known as 4D millimeter-wave (mmWave) radar or 4D imaging radar. Compared with 3D radar, it measures not only the distance, direction, and relative (Doppler) velocity of a target, but also its height. Thanks to its robustness in adverse weather and its lower cost, 4D radar is expected to replace low-beam LiDAR in the future. This repo summarizes 4D radar-based 3D object detection methods and datasets.

### Different 4D Radar Data Representations

- **PC**: Point Cloud (see the sketch after this list)
- **ADC**: Analog-to-Digital Converter signal
- **RT**: Radar Tensor (including Range-Azimuth-Doppler Tensor, Range-Azimuth Tensor, Range-Doppler Tensor)
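To make the PC representation concrete, here is a small sketch of a 4D radar point cloud as a structured NumPy array, including the spherical-to-Cartesian conversion from the four raw measurements (range, azimuth, elevation, Doppler). The field set is an illustrative assumption; actual datasets store different per-point attributes (VoD, for instance, also includes RCS and a motion-compensated radial velocity).

```python
import numpy as np

# Illustrative per-point fields for a 4D radar point cloud; the exact
# fields and units vary by dataset (assumed layout, not a spec).
point_dtype = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),  # 3D position
    ("doppler", np.float32),   # radial (Doppler) velocity, m/s
    ("power", np.float32),     # return power / RCS
])

points = np.zeros(4, dtype=point_dtype)

# The "4D" measurements are range, azimuth, elevation, and Doppler;
# position comes from the first three via spherical-to-Cartesian:
def to_cartesian(r, azimuth, elevation):
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return x, y, z

points[0]["x"], points[0]["y"], points[0]["z"] = to_cartesian(20.0, 0.1, 0.05)
points[0]["doppler"] = -3.2   # target closing at 3.2 m/s along the ray
```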
## Representative Researchers

- Li Wang (postdoctoral fellow) and his co-leader Xinyu Zhang @Tsinghua University, authors of the Dual Radar dataset
- Bing Zhu @Beihang University
- Lin Yang @Shanghai Jiao Tong University
- Chris Xiaoxuan Lu @University College London (UCL)
- Zhixiong Ma @Chinese Institute for Brain Research (formerly Tongji University), author of the TJ4DRadSet and OmniHD-Scenes datasets
- Zhiyu Xiang @Zhejiang University, author of the ZJUODset dataset
- Yujeong Chae and his PhD advisor Kuk-Jin Yoon @Korea Advanced Institute of Science and Technology (KAIST)
- Lili Fan @Beijing Institute of Technology
- Chenglu Wen @Xiamen University, author of the CMD and V2X-R datasets