Introduction to Localization for Self Driving Cars

Building a better intuition for the concept of localization in autonomous driving!

Prateek Sawhney
4 min read · Sep 12, 2021

In the previous medium article, we covered sensor fusion and how to combine readings from different sensors, such as lidar and radar. In this medium article, we will learn about localization. Localization is what allows an autonomous car to know precisely where it is. It’s an amazing topic, and I love it 😍

Introduction to Localization (Image by author)

And of course, localization is really important for driving autonomously. Without localization, it would be impossible for a self-driving car to drive safely. So, what is localization?

Localization Intuition

Conceptually, localization is pretty straightforward. A robot takes information about its environment and compares it to information it already knows about the real world. Humans do something similar. Imagine you were suddenly kidnapped, blindfolded, and stuffed into a car that drove for hours. You would have no idea where you were. Then the blindfold was removed, and you saw something like this 😃

Photo by Abuzar Xheikh on Unsplash

Now, what would you say if you were asked, where are you?

Explanation

If you recognized this as the Eiffel Tower, then you would probably say something like Paris or France. This may not seem very impressive, but it’s actually remarkable. Before the blindfold was removed, you had zero understanding of where you were in the world. You could have made a guess, but you would have had no idea if you were right. Yet after being shown a tiny amount of data, a single image, your uncertainty shrinks to a radius of a few kilometers. This is the main intuition behind localization.

A robot gathers information about its current environment and compares it with a known map to understand where it is.
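This intuition can be sketched with a toy 1D histogram filter, the classic Bayes-filter example. The world layout and sensor accuracies below are my own illustrative assumptions, not from the article: the belief starts uniform (the robot is completely lost), and a single measurement concentrates the probability mass on the matching locations.

```python
# A circular world of 5 cells; the robot senses the color of its cell.
world = ["green", "red", "red", "green", "green"]
belief = [1.0 / len(world)] * len(world)   # uniform prior: totally lost

p_hit, p_miss = 0.9, 0.1                   # assumed sensor accuracy

def sense(belief, measurement):
    """Bayes update: weight each cell by how well it matches the measurement."""
    posterior = [
        b * (p_hit if world[i] == measurement else p_miss)
        for i, b in enumerate(belief)
    ]
    total = sum(posterior)
    return [p / total for p in posterior]  # normalize to a probability distribution

belief = sense(belief, "red")
print(belief)  # probability mass concentrates on the two red cells
```

One observation does not pin the robot down to a single cell, just as the Eiffel Tower photo only narrows you down to Paris, but the uncertainty drops dramatically compared to the uniform prior.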

Localizing a Self Driving Car

We will start by assuming that we are in a car that is totally lost, meaning the driver has no clue where the car is. We also assume that we have a global map of the environment. Now, generally speaking, localization answers the question: where is our car in a given map, with high accuracy?

And high accuracy means between 3 and 10 centimeters. By default, we could find the car on the map using a global navigation satellite system, better known as GPS. However, GPS is not precise enough. Most of the time, its accuracy is about the width of a lane, one to three meters, but sometimes the error can be as large as 10 to 50 meters. Clearly this is not appropriate for an autonomous vehicle, so we cannot rely on GPS alone and have to find another technique to localize the car inside a given map. It is common practice to use the onboard sensor data, along with our global map, to solve the localization problem. With the onboard sensors, it is possible to measure distances to static obstacles like trees, poles, or walls. All the distances to these nearby static objects are measured in the local coordinate system of the car.

Now, if we are lucky, the same obstacles that were observed by the onboard sensors are also part of the map. And, of course, the map has its own global coordinate system. The observations must be matched with the map information to estimate where the car is located on the map. When this process is done correctly, it results in a transformation between the local coordinate system of the car and the global coordinate system of the map. This transformation should be as accurate as possible, ideally within 10 centimeters or less. If you can estimate this transformation, you have solved the localization problem.
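To make this concrete, here is a minimal sketch of applying such a transformation once it has been estimated. The pose, landmark coordinates, and function name are illustrative assumptions of mine: the car’s estimated pose (x, y, heading) on the map is used to rotate and translate a landmark measured in the car frame into map coordinates.

```python
import math

def local_to_global(pose, landmark_local):
    """Transform a point from the car frame to the map frame (2D rigid transform)."""
    x, y, theta = pose                 # car position and heading on the map
    lx, ly = landmark_local            # measurement in the car's local frame
    gx = x + lx * math.cos(theta) - ly * math.sin(theta)
    gy = y + lx * math.sin(theta) + ly * math.cos(theta)
    return gx, gy

# Car at map position (4, 5), heading 90° (facing the +y direction);
# a pole seen 2 m straight ahead should land at (4, 7) on the map.
pose = (4.0, 5.0, math.pi / 2)
gx, gy = local_to_global(pose, (2.0, 0.0))
print(gx, gy)  # ≈ (4.0, 7.0)
```

Localization works in the opposite direction: given such local measurements and the landmark positions stored in the map, we search for the pose (x, y, theta) that best matches the two, which is exactly the transformation the article describes.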

So after this example, we can summarize:

  1. First, localization answers the question of where the car is in a given
    map within an accuracy of 10 centimeters or less.
  2. And second, onboard sensors are used to estimate the transformation
    between local measurements and a given map.

