Introduction to Sensor Fusion for Self Driving Cars

A basic introduction to RADAR and LIDAR along with their strengths and weaknesses

We played a lot with images and camera images in the previous Medium articles; check the references section for those. As you might imagine, a machine's eyes are as important to it as our own eyes are to us when we see the world. But we don't just have eyes: we have noses, ears, and skin, we have built-in sensors that measure the deflection of our muscles, and our inner ears sense gravity and balance. In this Medium article, we will understand what RADAR and LIDAR are, and further understand their role in self-driving car development 😀



A self-driving car carries several sensors: radar, lidar (laser), and many others. Why more than one? For the same reason our eyes can't smell and our nose can't hear, yet both kinds of information matter for human life: each sensor perceives something the others can't. Lidar can measure distances that eyes have a hard time with, and radar can sometimes see through fog that visible light can't penetrate. The same is true for the car. So now we're going to dive in and talk about these two sensors, look at the math behind them, and eventually work on integrating measurements from multiple sensors into one single picture of the world.
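The idea of integrating several sensors into one world picture can be sketched with the simplest possible fusion rule: average two noisy range measurements, weighting each by the inverse of its variance. The sensor noise figures below are illustrative assumptions, not specifications of any real radar or lidar unit.

```python
# A minimal sketch of fusing two noisy range measurements into one
# estimate via an inverse-variance weighted average. The noise values
# are made-up illustrative numbers, not real sensor specifications.

def fuse_measurements(z1, var1, z2, var2):
    """Inverse-variance weighted average of two measurements."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # fused estimate is more certain than either input
    return fused, fused_var

# Radar says an obstacle is 10.4 m away (noisy, sigma = 0.5 m);
# lidar says 10.1 m (precise, sigma = 0.05 m).
distance, variance = fuse_measurements(10.4, 0.5**2, 10.1, 0.05**2)
print(round(distance, 2))  # lands close to the lidar reading, since it is more certain
```

This weighting is the core intuition behind the Kalman-filter-style fusion used in practice: the more confident sensor pulls the estimate toward itself.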

RADAR Strengths and Weaknesses

RADAR stands for Radio Detection And Ranging. Before the radar itself, a word on the cameras: two stereo cameras are fixed on the self-driving car, and together they work just like our eyes to form a stereo image that provides both image data and distance information. There is also a dedicated camera for traffic-signal recognition. Traffic signals are often located on the far side of an intersection, so this camera usually has a special lens to give it sufficient range to detect the signal from far away. The radar sits behind the front bumper. Radars have been in automobiles for years; we can find them in systems like adaptive cruise control, blind-spot warning, collision warning, and collision avoidance. Radar uses the Doppler effect to measure speed directly.

The Doppler effect is a change in the frequency of a wave that depends on whether the object is moving toward us or away from us: the frequency of the reflected radar wave shifts up as the object approaches and down as it recedes, and measuring that shift gives the object's radial speed.

This is kind of like how a fire-engine siren sounds different depending on whether the fire engine is moving toward us or away from us. Because radar waves bounce off hard surfaces, they can also provide measurements of objects without a direct line of sight: radars can see underneath vehicles and spot buildings and objects that would otherwise be hidden from the field of view. Radar is also the sensor least affected by weather conditions like rain and fog.
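The Doppler relation behind this can be written as a one-line calculation: for a radar, the echo's two-way Doppler shift f_d relates to the target's radial speed v by f_d = 2·v·f0/c. The 77 GHz carrier below is a typical automotive radar band, used here purely as an illustrative assumption.

```python
# A hedged sketch of how radar reads speed directly from the Doppler
# shift of the echo: f_d = 2 * v * f0 / c, solved for v.
# The 77 GHz carrier is an assumed, typical automotive radar frequency.

C = 299_792_458.0  # speed of light in m/s

def radial_speed(doppler_shift_hz, carrier_hz=77e9):
    """Radial speed (m/s) of a target from the measured two-way Doppler shift."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# An echo shifted by ~5.1 kHz at 77 GHz corresponds to roughly 10 m/s.
print(round(radial_speed(5137.0), 1))  # ~10.0 m/s (about 36 km/h)
```

Note that this only yields the radial component of the velocity, i.e. the speed along the line from the radar to the target.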

Compared to cameras, however, radars have low resolution, especially in the vertical direction, and this low resolution can cause problems with spurious reflections.

LIDAR Strengths and Weaknesses

LIDAR stands for Light Detection and Ranging. Unlike RADAR, which uses radio waves, LIDAR uses an infrared laser beam to determine the distance between the sensor and a nearby object. Most current LIDARs use light around the 900-nanometer wavelength, although some use longer wavelengths, which perform better in rain and fog. A rotating unit sweeps the laser so that the sensor continually scans its field of view.

The lasers are pulsed, and those pulses are reflected by nearby objects. The reflections return as a point cloud that represents those objects. LIDAR has a higher spatial resolution than radar: the laser beam is more concentrated, there is a large number of scan layers stacked in the vertical direction, and each layer contains a high density of points. On the other hand, today's LIDAR technology cannot directly measure the speed of objects, and it is affected by weather conditions and by dirt on the sensor, which must therefore be kept clean.
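How a single pulse becomes a point in that cloud can be sketched in a few lines: the pulse's round-trip time gives the range (d = c·t/2), and the beam's azimuth and elevation angles place that range in 3D. The timing and angles below are made-up illustrative values.

```python
import math

# A small sketch of turning a lidar return into a point-cloud point:
# range from time-of-flight, then spherical-to-Cartesian conversion.
# All numeric inputs are assumed, illustrative values.

C = 299_792_458.0  # speed of light in m/s

def tof_to_range(round_trip_s):
    """Range in metres from a pulse's round-trip time (d = c * t / 2)."""
    return C * round_trip_s / 2.0

def to_point(range_m, azimuth_rad, elevation_rad):
    """Convert a spherical lidar return to a Cartesian (x, y, z) point."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

r = tof_to_range(133.4e-9)          # a ~133 ns round trip -> about 20 m
point = to_point(r, math.radians(30), math.radians(2))
print(round(r, 1), [round(c, 2) for c in point])
```

Repeating this for every pulse in every scan layer, at every rotation angle, is what produces the dense 3D point cloud described above.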


References

1. Computer Vision Fundamentals — Self Driving Cars (Finding Lane Lines)

2. Introduction to Neural Networks For Self Driving Cars (Foundational Concepts Part — 1)

3. Introduction to Neural Networks For Self Driving Cars (Foundational Concepts Part — 2)

4. Introduction to Deep Learning for Self Driving Cars (Part — 1)

5. Introduction to Deep Learning for Self Driving Cars (Part — 2)

6. Introduction to Convolutional Neural Networks for Self Driving Cars

7. Introduction to Keras & Transfer Learning for Self Driving Cars

8. Introduction to Localization for Self Driving Cars

9. Introduction to Functional Safety for Self Driving Cars

10. Introduction to Safety Management for Self Driving Cars

11. Functional Safety Hazard Analysis and Risk Assessment for Self Driving Cars

12. Functional Safety Concept for Self Driving Cars
