March 20, 2018 | Lawrence, Kansas
Accurate perception of the surroundings is critical for ADAS and L4/L5 autonomous vehicles. To ensure the safety of pedestrians as well as passengers, the vehicle sensor system must perform in all weather and lighting conditions, including rain, snow, darkness, fog, smog, and dirt. Camera-radar sensor fusion is considered a pivotal means to achieve this goal and is often implemented alongside other perception technologies, such as LiDAR and ultrasound, to provide the redundancy and accuracy required for road safety.
The following demonstration uses the long-range and mid-range configurations of Ainstein’s K-77 radar together with a camera, integrated through our Radar and Camera Sensor Fusion Platform.
Pairs of radar data and camera images are captured nearly simultaneously. The radar reports SNR, range, velocity, and azimuth angle for each detected target, while a convolutional neural network (CNN) identifies objects within the camera images. The two data sets are cross-validated using the radar’s azimuth angle and the relative position of the CNN-identified objects within the image. This cross-validation lets the platform correct missed or false detections from either sensor, while providing rich information about each detected object, including object type, distance, and velocity.
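To make the cross-validation step concrete, here is a minimal Python sketch of one way the association could work: each radar target’s azimuth angle is projected to an expected image column (assuming a pinhole camera aligned with the radar boresight), and the target is paired with the CNN detection whose bounding-box center is horizontally closest. All names, the projection model, and the matching threshold are illustrative assumptions, not Ainstein’s actual implementation.

```python
# Hypothetical sketch of camera-radar cross-validation; names, the pinhole
# projection, and the pixel threshold are assumptions for illustration only.
import math
from dataclasses import dataclass

@dataclass
class RadarTarget:
    snr_db: float        # signal-to-noise ratio reported by the radar
    range_m: float       # radial distance to the target (meters)
    velocity_mps: float  # radial velocity (m/s)
    azimuth_rad: float   # azimuth angle relative to boresight (radians)

@dataclass
class CameraDetection:
    label: str           # object class predicted by the CNN (e.g. "car")
    box: tuple           # (x_min, y_min, x_max, y_max) in pixels

def azimuth_to_pixel_column(azimuth_rad, image_width_px, horizontal_fov_rad):
    """Map a radar azimuth angle to an expected image column, assuming the
    radar and camera share a boresight and a simple pinhole camera model."""
    focal_px = (image_width_px / 2.0) / math.tan(horizontal_fov_rad / 2.0)
    return image_width_px / 2.0 + focal_px * math.tan(azimuth_rad)

def fuse(radar_targets, camera_detections, image_width_px,
         horizontal_fov_rad, max_offset_px=60.0):
    """Associate each radar target with the CNN detection whose bounding-box
    center lies closest (horizontally) to the target's projected column."""
    fused = []
    for target in radar_targets:
        expected_col = azimuth_to_pixel_column(
            target.azimuth_rad, image_width_px, horizontal_fov_rad)
        best, best_offset = None, max_offset_px
        for det in camera_detections:
            center_col = (det.box[0] + det.box[2]) / 2.0
            offset = abs(center_col - expected_col)
            if offset < best_offset:
                best, best_offset = det, offset
        if best is not None:
            # Cross-validated object: type from the camera, range and
            # velocity from the radar.
            fused.append({"type": best.label,
                          "range_m": target.range_m,
                          "velocity_mps": target.velocity_mps})
    return fused
```

In this sketch, each fused record keeps the object type from the camera and the range and velocity from the radar, reflecting the complementary pairing described above; radar targets with no nearby camera detection (or vice versa) would be the candidates for the missed- or false-detection correction step.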
Ainstein’s Camera Radar Sensor Fusion Platform is designed to enhance Blind Spot Detection and Adaptive Cruise Control as well as L4/L5 self-driving functions. We’re excited to bring these technologies to more vehicles.