In autonomous driving, 3D object detection plays a crucial role as a key perception module. Radar-vision fusion based object detection combines radar data and visual data to detect and recognize targets. In practice, however, visual data can suffer from a range of problems during collection, such as a limited field of view, lighting variations, and motion blur. To address these issues, beyond commonly used onboard-camera techniques such as dynamic exposure control and low-light enhancement, this paper proposes a novel radar-vision fusion object detection framework built on CenterFusion. The framework focuses on handling abnormal visual data and aims to achieve more reliable target detection under complex environmental conditions by fully exploiting the complementarity of radar and visual data and by introducing a point cloud feature extraction module and a modal attention mechanism. Comparative experiments on nuScenes-mini under different conditions show that the proposed method can fully replace CenterFusion across these scenarios, demonstrating excellent performance.
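The paper's implementation details are not available here, but the modal attention idea can be illustrated with a minimal sketch: pool each modality's feature map, compute one attention weight per modality, and take a weighted sum so that a degraded modality (e.g. blurred camera input) can be down-weighted. All function names, shapes, and the gating design below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def modal_attention(img_feat, radar_feat, w_gate):
    """Fuse image and radar feature maps with per-modality attention weights.

    img_feat, radar_feat: (C, H, W) feature maps of matching shape.
    w_gate: (2, 2C) gating matrix producing one logit per modality.
    Illustrative sketch only; the paper's gating design may differ.
    """
    # Global-average-pool each modality to a C-vector and concatenate.
    pooled = np.concatenate([img_feat.mean(axis=(1, 2)),
                             radar_feat.mean(axis=(1, 2))])   # shape (2C,)
    # One logit per modality; softmax turns them into fusion weights.
    weights = softmax(w_gate @ pooled)                        # shape (2,)
    # Convex combination of the two modalities: a low image weight
    # lets reliable radar features dominate when vision is degraded.
    return weights[0] * img_feat + weights[1] * radar_feat
```

With a zero gating matrix the two weights are equal (0.5 each), so the fusion reduces to a plain average; training the gate lets the network shift weight toward the more reliable modality per sample.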