Calibrate Camera Position Using a Simplified Genetic Algorithm

For a robot to move precisely in the real world, it is essential, yet very difficult, to set the position and angle of its sensing devices, such as cameras, exactly. Calibration is the common remedy: the camera's position and angle are treated as parameters in a program. But this raises another problem. The same difficulty that prevents us from mounting the camera precisely also prevents us from knowing where it actually is. And the parameters we need include not only the position but also the angle and optical characteristics such as the FOV (field of view), which the manufacturer of the sensing device does not always provide.

Calibration using a genetic algorithm

One way to tackle this problem is to estimate the parameters from a captured image, and calibration with a genetic algorithm is one of the easiest solutions.

Let’s say we have a sheet of paper on which a grid pattern is printed, and the camera is looking at it.

We read the real-world y-coordinates that the bottom, middle, and top pixels of the image correspond to. These values will be used as the ground truth for the genetic algorithm.

As a side note, a simplified diagram of this setup is shown below; it is mathematically easy to describe with tangent equations. The constants we want to find are theta_start, theta_fov (called theta_camera in the code below), z, and the offset y_start.
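To make the tangent model concrete, here is a minimal sketch of how a pixel row maps to a ground y-coordinate under this model; `pixel_row_to_y` is a hypothetical helper, and all the numeric values are made up purely for illustration:

```python
import math

def pixel_row_to_y(theta_start, theta_fov, z, y_start, row_frac):
    # row_frac: 0.0 = bottom image row, 0.5 = middle, 1.0 = top.
    # The ray through that row makes an angle of
    # theta_start + row_frac * theta_fov (degrees); the tangent of that
    # angle, scaled by z, gives the offset from y_start.
    angle = theta_start + row_frac * theta_fov
    return y_start + z * math.tan(math.radians(angle))

# hypothetical true constants, just for illustration
y_bottom = pixel_row_to_y(10.0, 40.0, 1.0, 0.0, 0.0)
y_middle = pixel_row_to_y(10.0, 40.0, 1.0, 0.0, 0.5)
y_top    = pixel_row_to_y(10.0, 40.0, 1.0, 0.0, 1.0)
print(y_bottom, y_middle, y_top)
```

Reading these three y values off the printed grid for a real camera gives exactly the three ground-truth numbers the genetic algorithm fits against.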

The greatest advantage of a genetic algorithm is that all you have to do is write down the mathematical model, give the program the ground truth, and run it to obtain the optimal values. Moreover, if a loss can somehow be computed for each iteration, a genetic algorithm is likely to work on that problem, which makes it a flexible way to search for solutions. Each iteration evaluates and quantifies the fitness of the candidate constants with a loss function; least squares is a proper choice in many cases.
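As a concrete instance of such a loss, a least-squares fitness over a set of predictions can be this small (a generic sketch; `least_squares_loss` is a hypothetical helper name, not tied to the camera model yet):

```python
def least_squares_loss(predicted, measured):
    """Sum of squared differences; lower means a fitter individual."""
    return sum((p - m) ** 2 for p, m in zip(predicted, measured))

# an individual whose predictions sit closer to the measurements
# scores a lower loss
print(least_squares_loss([1.0, 2.0], [1.1, 1.9]))
```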

Because the mathematical model in this case is continuous, the optimization is fairly straightforward. If the model were more complex, you might have to deal with local minima. The only technique this simplified optimization needs is randomly tweaking the parameters (mutation); no crossover operation is used.

The code is shown below.

import json, math, random

def calc_loss(theta_start_in, theta_camera_in, z_in, y_start_in, fac):
    # mutation: perturb each parameter by a small random amount
    # (fac == 0 leaves the parent unchanged)
    rand0 = fac * (random.random() - 0.5)
    rand1 = fac * (random.random() - 0.5)
    rand2 = fac * (random.random() - 0.5)
    rand3 = fac * (random.random() - 0.5)
    theta_start = theta_start_in + rand0
    theta_camera = theta_camera_in + rand1
    z = z_in + rand2
    y_start = y_start_in + rand3

    # predicted y-coordinates for the top, middle, and bottom image rows
    y1 = y_start + z * math.tan(math.radians(theta_start + theta_camera))
    y2 = y_start + z * math.tan(math.radians(theta_start + 0.5 * theta_camera))
    y3 = y_start + z * math.tan(math.radians(theta_start))
    y1_true = params['y1_true']
    y2_true = params['y2_true']
    y3_true = params['y3_true']
    # least-squares loss against the measured ground truth
    loss = (y1 - y1_true) ** 2 + (y2 - y2_true) ** 2 + (y3 - y3_true) ** 2
    return loss, theta_start, theta_camera, z, y_start

class Param():
    def __init__(self, loss, theta_start, theta_camera, z, y_start):
        self.loss = loss
        self.theta_start = theta_start
        self.theta_camera = theta_camera
        self.z = z
        self.y_start = y_start

    def __lt__(self, other):
        # self < other
        return self.loss < other.loss

# ground truth measured from the captured image
# (placeholder values -- replace with your own measurements)
params = {
    'y1_true': 1.19,
    'y2_true': 0.58,
    'y3_true': 0.18,
}

# starter params
theta_start = 1.0
theta_camera = 1.0
z = 1.0
y_start = 1.0

# start learning
n_gen = 500
n_epoch = 2000
for i in range(n_epoch):

    children = []
    for j in range(n_gen):
        # j == 0 re-evaluates the unmodified parent; the rest are mutants
        factor = 0.01 if j > 0 else 0

        loss_ret, theta_start_ret, theta_camera_ret, z_ret, y_start_ret = \
            calc_loss(theta_start, theta_camera, z, y_start, factor)
        children.append(Param(loss_ret, theta_start_ret, theta_camera_ret, z_ret, y_start_ret))

    # sort ascending by loss; the 0th is the best
    children = sorted(children)
    theta_start, theta_camera, z, y_start = children[0].theta_start, children[0].theta_camera, children[0].z, children[0].y_start
    if i == 0:
        print('')
        print('worst child: loss: {}'.format(children[-1].loss))
    if i % 500 == 0:
        print('epoch: {}, loss: {}'.format(i, children[0].loss))

# results
best = children[0]
params['theta_start'] = best.theta_start
params['theta_camera'] = best.theta_camera
params['z'] = best.z
params['y_start'] = best.y_start
data = json.dumps(params, indent=4)
fn = 'camera_params.json'
with open(fn, 'w') as f:
    f.write(data)

print('\nlearned params:')
for k, v in params.items():
    print('{}: {}'.format(k, v))

In each iteration, the best individual, which of course marked the least loss, is picked and used as the parent of the next iteration. After 2,000 epochs with 500 individuals, the loss eventually turned out to be nearly zero, and we obtained the optimal solution for this problem.
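Once camera_params.json is written, the learned constants can be reused to map any image row back to a ground y-coordinate. A minimal sketch, assuming the JSON keys produced by the script above (the numeric values here are hypothetical stand-ins for the calibration output):

```python
import json, math

def row_to_y(row_frac, p):
    # row_frac: 0.0 = bottom image row, 1.0 = top image row
    angle = p['theta_start'] + row_frac * p['theta_camera']
    return p['y_start'] + p['z'] * math.tan(math.radians(angle))

# in practice, load the constants the calibration script wrote:
#   with open('camera_params.json') as f:
#       p = json.load(f)
# hypothetical stand-in values for illustration:
p = {'theta_start': 10.0, 'theta_camera': 40.0, 'z': 1.0, 'y_start': 0.0}
print('middle row maps to y = {:.3f}'.format(row_to_y(0.5, p)))
```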