21 December 2022

Building a particle filter

For the autonomous driving project that I lead, localization has been a consistent thorn in our side. We’ve tried NDT, ICP, semantic nudging, GNSS-IMU fusion… but there’s an obvious gap in our efforts, namely what I’ll call Bayesian localizers. They come in many forms: histogram filters, Markov localization, and so on. From what I can determine, they are all quite similar, and their most popular manifestation is the particle filter (also called the Monte Carlo filter), whose application to robot localization was described by Thrun, Fox, Burgard, and Dellaert in their 2001 paper on Monte Carlo Localization.

About particle filters

Typically, state estimation techniques provide a single estimate at a time. When you open the GPS map on your phone, you see a single blue dot with your position. Particle filters (PFs) take another approach: they fill the map with hundreds, even thousands, of dots, each one representing a possible state. When the filter first starts, these dots are spread out, reflecting the initial uncertainty of the particle filter’s estimate. Over time, these countless dots converge onto a single location.

How? How does a particle filter know where to shift the dots, gradually nudging them into a consensus? Particle filters are a Monte Carlo method: rather than computing the probability distribution over states analytically, they approximate it with random samples, the particles. Each particle is assigned a likelihood based on how well it agrees with the latest observations, and resampling concentrates the cloud around the particles with high likelihoods. This process repeats at every time step, which is why particle filters are sometimes called sequential Monte Carlo filters.

Modeling probability

PFs are part of the broad field of Bayesian statistics, the math concerned with the calculation of belief. This focus on belief makes Bayesian methods a great tool for robot localization. After all, our sense of where we are is simply a belief, a best guess given our observations. Bayesian methods in turn revolve around Bayes’ theorem, which states:

\[\begin{equation} \label{bayes} P(A\mid B)=\frac{P(B\mid A)P(A)}{P(B)} \end{equation}\]

In other words, the probability that A is true given that B is true equals the probability of the reverse, \(P(B\mid A)\), scaled by the ratio \(P(A)/P(B)\). We can apply (1) to form a posterior probability: our updated belief in a hypothesis A after observing evidence B.
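One way to read this for localization (the notation here is just for illustration): let \(x\) be the vehicle’s pose and \(z\) a sensor observation, such as a detected stop sign. Bayes’ theorem then gives the posterior belief over the pose:

\[ P(x \mid z) = \frac{P(z \mid x)\,P(x)}{P(z)} \propto P(z \mid x)\,P(x) \]

Here \(P(x)\) is the prior belief about the pose, \(P(z \mid x)\) is the measurement likelihood (how plausible the observation is from that pose), and \(P(z)\) only normalizes the result.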

PFs are Markovian: a particle’s new state and likelihood depend only on its state from the previous step, together with the latest actuation command and sensor reading, and on nothing further back than that.
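Written out with the standard Bayes-filter notation (state \(x_t\), control \(u_t\), measurement \(z_t\)), the belief at time \(t\) depends only on the belief at \(t-1\) and the latest control and measurement:

\[ \mathrm{bel}(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, \mathrm{bel}(x_{t-1})\, dx_{t-1} \]

where \(\eta\) is a normalizing constant. A particle filter approximates this recursion with its cloud of samples: the motion update draws new states from \(P(x_t \mid u_t, x_{t-1})\), and the sensor update weights each sample by \(P(z_t \mid x_t)\).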

Monte Carlo Localization algorithm

The Monte Carlo Localization (MCL) listing below was adapted from Daniel Lu’s excellent Wikipedia article.

from dataclasses import dataclass
import random


@dataclass
class Particle:
    state: tuple   # vehicle pose, e.g. (x, y, heading)
    belief: float  # likelihood weight assigned by the sensor update


def mcl(previous_particles, actuation_command, sensor_data):
    unfiltered_particles = []

    # Iterate over all particles in the previous set, updating them
    # with the latest actuation command and sensor data
    for prev_particle in previous_particles:
        new_state = motion_update(actuation_command, prev_particle.state)
        new_belief = sensor_update(sensor_data, new_state)
        unfiltered_particles.append(Particle(new_state, new_belief))

    # Resample: based on their updated beliefs, draw new particles with a
    # bias toward likely (high-belief) particles. A likely particle may be
    # drawn several times; unlikely particles tend to die out.
    beliefs = [particle.belief for particle in unfiltered_particles]
    filtered_particles = random.choices(
        unfiltered_particles, weights=beliefs, k=len(unfiltered_particles)
    )

    return filtered_particles
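To run the filter, you feed each iteration’s output back in as the next iteration’s input. Here is a rough sketch of such a loop; initialize_particles, get_actuation_command, and get_sensor_data are hypothetical stand-ins for project-specific code, not functions from the listing above.

# Hypothetical driver loop around mcl()
particles = initialize_particles(num_particles=1000)  # spread across the map

while True:
    actuation_command = get_actuation_command()  # latest actuation command
    sensor_data = get_sensor_data()              # latest observations
    particles = mcl(particles, actuation_command, sensor_data)

    # One possible point estimate of the pose: the highest-belief particle
    best_guess = max(particles, key=lambda p: p.belief)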

In the motion update, each particle (dot) is displaced based on the actuation command. Given a prior understanding of the vehicle’s dynamics (its latest acceleration, throttle maps, etc.), we can calculate a likely displacement from an abstract actuation command \((T, \theta)\), where \(T\) is a throttle value from -1 (full reverse) to +1 (full ahead) and \(\theta\) is the mean steering angle of the Ackermann-steered front wheels.
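Here’s a minimal sketch of what motion_update could look like, assuming the pose is a plain (x, y, heading) tuple and a simple kinematic bicycle model; the wheelbase, speed scaling, and noise levels are illustrative placeholders, not our actual vehicle dynamics. The injected noise is what lets the particles fan out to cover plausible motions.

import math
import random

WHEELBASE = 2.7  # meters (illustrative)
MAX_SPEED = 5.0  # meters per filter step at full throttle (illustrative)

def motion_update(actuation_command, state):
    """Displace one particle's (x, y, heading) pose with a simple
    bicycle model, then add Gaussian noise for motion uncertainty."""
    throttle, steering_angle = actuation_command  # (T, theta)
    x, y, heading = state

    distance = throttle * MAX_SPEED
    heading = heading + (distance / WHEELBASE) * math.tan(steering_angle)
    x = x + distance * math.cos(heading)
    y = y + distance * math.sin(heading)

    return (
        x + random.gauss(0, 0.1),
        y + random.gauss(0, 0.1),
        heading + random.gauss(0, 0.01),
    )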

In the sensor update, the robot compares its current observations against what it would expect to observe if it were actually at each particle’s state. For example, if the robot sees a stop sign, yet a given particle sits at a spot on the map with no stop sign nearby, then that particle is given a low likelihood.
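And a matching sketch of sensor_update, assuming sensor_data is a dict carrying the measured range to the nearest stop sign and that the map’s stop-sign positions are known; the landmark coordinates and the Gaussian noise figure are made up for illustration.

import math

STOP_SIGNS = [(10.0, 4.0), (52.5, 31.0)]  # known map positions (illustrative)
RANGE_STDDEV = 0.5                        # measurement noise in meters (illustrative)

def sensor_update(sensor_data, state):
    """Return a likelihood for one particle by comparing the measured
    range to the nearest stop sign against the range we would expect
    to measure from the particle's pose."""
    measured_range = sensor_data["stop_sign_range"]
    x, y, _heading = state

    expected_range = min(math.hypot(sx - x, sy - y) for sx, sy in STOP_SIGNS)

    # Gaussian likelihood: the larger the mismatch between expectation
    # and measurement, the lower this particle's belief.
    error = measured_range - expected_range
    return math.exp(-(error ** 2) / (2 * RANGE_STDDEV ** 2))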

Helpful resources

Kalman and Bayesian Filters in Python