Existing 3D object detection networks that fuse multi-view images and lidar point clouds at the feature level mostly fuse the multi-sensor features output by the backbone, or the BEV features of the two modalities in a unified view, by direct concatenation. The features obtained in this way are degraded both by the modality conversion of the raw data and by the multi-sensor feature fusion itself. To address this problem, a 3D object detection network with channel-attention-based feature fusion is proposed to improve the aggregation ability of BEV feature fusion and thereby the representation ability of the fused features. Experimental results on the open-source nuScenes dataset show that, compared with the baseline network, the proposed network captures the objects' overall features better, and the average orientation error and average velocity error are reduced by 4.9% and 4.0%, respectively. In autonomous driving, this improves the vehicle's ability to perceive moving obstacles on the road and therefore has practical value.
KEYWORDS: Cameras, Image fusion, Head, Feature extraction, 3D image processing, Image segmentation, 3D modeling, LIDAR, Information fusion, Data modeling
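As a rough illustration of the kind of channel-attention fusion the abstract describes, the sketch below applies an SE-style channel attention to concatenated camera and lidar BEV features before reducing them to a single fused map. It assumes both modalities have already been projected onto the same BEV grid; the module structure and parameter names are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Fuse camera and lidar BEV features with SE-style channel attention.

    Sketch only: concatenate the two BEV maps along the channel axis,
    reweight the channels with a squeeze-and-excitation branch, then
    reduce back to the output width with a 1x1 convolution.
    """

    def __init__(self, cam_channels: int, lidar_channels: int,
                 out_channels: int, reduction: int = 8):
        super().__init__()
        in_channels = cam_channels + lidar_channels
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                              # squeeze: global spatial average
            nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // reduction, in_channels, kernel_size=1),
            nn.Sigmoid(),                                          # excitation: per-channel weights in (0, 1)
        )
        self.reduce = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        # Both inputs are assumed to be (B, C, H, W) on the same BEV grid.
        fused = torch.cat([cam_bev, lidar_bev], dim=1)
        fused = fused * self.attention(fused)                      # reweight channels before reduction
        return self.reduce(fused)
```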
Existing 3D object detection networks based on the cross-attention mechanism in the bird's-eye-view perspective mostly use the camera calibration parameters to project each 3D reference point onto the two-dimensional image and take the front-view feature at the projected location as the feature of that point. Features obtained in this way are sensitive to the calibration parameters and to the accuracy of the projected position. To reduce this influence, a local feature fusion network is proposed, which fuses the features within a certain range around the projection point to represent the feature of the 3D point, thereby improving the feature representation capability. Experimental results on the nuScenes val set show that the NDS is increased by 1.4% compared with the baseline. The network is also robust: under the same noise, its accuracy degradation is more than 56% lower than the baseline's, giving it practical value in autonomous driving.
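The sketch below illustrates the general idea of sampling a local window of image features around each projected 3D point rather than a single pixel, which is the kind of local fusion the abstract motivates. The function names, the camera-frame input convention, and the simple averaging over the window are assumptions for illustration; the paper's actual fusion within the window may be learned rather than a plain mean.

```python
import torch
import torch.nn.functional as F

def sample_local_features(image_feat: torch.Tensor,
                          points_3d: torch.Tensor,
                          cam_to_img: torch.Tensor,
                          window: int = 3) -> torch.Tensor:
    """Sample image features in a small window around each projected 3D point.

    image_feat : (C, H, W) feature map of one camera view
    points_3d  : (N, 3) 3D reference points in the camera coordinate frame (assumed)
    cam_to_img : (3, 3) intrinsic matrix projecting camera coordinates to pixels
    Returns    : (N, C) locally fused features, one per 3D point
    """
    C, H, W = image_feat.shape

    # Project to pixel coordinates: (u, v, 1) ~ K @ (x, y, z).
    proj = (cam_to_img @ points_3d.T).T                          # (N, 3)
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-5)              # (N, 2) pixel positions

    # Build the grid of pixel offsets covering the local window.
    r = window // 2
    offsets = torch.stack(torch.meshgrid(
        torch.arange(-r, r + 1, dtype=uv.dtype),
        torch.arange(-r, r + 1, dtype=uv.dtype),
        indexing="xy"), dim=-1).reshape(-1, 2)                   # (window*window, 2)

    samples = uv[:, None, :] + offsets[None, :, :]               # (N, K, 2)
    # Normalize to [-1, 1] for grid_sample (x over width, y over height).
    samples[..., 0] = samples[..., 0] / (W - 1) * 2 - 1
    samples[..., 1] = samples[..., 1] / (H - 1) * 2 - 1

    grid = samples.unsqueeze(0)                                  # (1, N, K, 2)
    feat = F.grid_sample(image_feat.unsqueeze(0), grid,
                         align_corners=True)                     # (1, C, N, K)
    # A plain average over the window stands in for the paper's fusion step.
    return feat.mean(dim=-1).squeeze(0).T                        # (N, C)
```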
This article presents a data-driven control algorithm for automotive electronically controlled opening and closing systems. By collecting data under a variety of mixed, complex working conditions, an automatic decision-making model is learned from the data, which reduces the design effort spent on the algorithm itself and increases the intelligence of the ECU. Data from previously untrained conditions can be collected at any time to learn a new model that quickly replaces the original one. Because the model is trained on a large amount of data collected under complex mixed conditions, the control method adapts well, simplifies the electronic control algorithm, and has a degree of generality in the automotive electronics field. Taking the anti-pinch control of a power liftgate as an example, the article proposes a data-driven BP neural network method for classifying anti-pinch force, selects good data for network training, and reaches a recognition accuracy of 99.7%. Rapid-prototyping bench tests based on dSPACE show that the anti-pinch forces remain within 70 N to 100 N, which verifies the feasibility of the control method.
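A minimal sketch of a BP (backpropagation-trained) classifier of the kind the abstract describes is given below. The input features (e.g., motor current, speed, and their recent changes), the two output classes, and the training hyperparameters are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AntiPinchClassifier(nn.Module):
    """Small BP neural network for anti-pinch classification (illustrative)."""

    def __init__(self, num_features: int = 4, hidden: int = 16, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.Sigmoid(),                        # classic BP networks use sigmoid hidden units
            nn.Linear(hidden, num_classes),      # e.g., normal motion vs. pinch event (assumed)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Hypothetical training step on pre-collected bench data.
model = AntiPinchClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()                              # backpropagate the classification error
    optimizer.step()
    return loss.item()
```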