I recently added support for convolution integrals on the Function Graph object in the editor. I thought I’d take the opportunity to write a quick introduction to the topic and present a few interactive diagrams. As with previous posts, this is not meant to be a complete teaching resource on convolution integrals. There are many excellent resources out there already! It’s just another take on them with some, hopefully interesting, diagrams.

Note: if you already know all about convolution integrals and just want to see some nice interactive diagrams, feel free to scroll down to here!

Wikipedia has a good page on convolution integrals which covers some of their mathematical background. I personally find that I understand them much better in the context of 2D images. Let’s work backwards a little and assume you want to blur a 2D image. As a graphics programmer you might write a pixel shader that samples a few texels around the position you’re writing to, averages them, and outputs the resulting color. You’d then run this shader over every pixel of your target image. Intuitively, you’d understand that the wider your sampling pattern, the blurrier the resulting image would be.

Let’s try and turn this into pseudo-code.

For each output pixel:
    Sample a bunch of texels from the image.
    Average them.
    Write to output.

Those texels aren’t just anywhere in the source image though. They’re in the vicinity of the pixel we’re rendering. Let’s amend the pseudo-code.

For each output pixel p:
    Sample all the texels that are in the vicinity of p.
    Average them.
    Write to output.

What if we were to clarify what ‘in the vicinity of p’ meant? Let’s express it as those texels that are within a certain distance threshold from p.

For each output pixel p:
    Sample all the texels whose distance from p is less than threshold.
    Average them.
    Write to output.

Ok, how do we find all the texels that are within a certain distance of p? Let’s pretend that performance isn’t an issue and that reading every texel on the source image is free. We might then write code like this:

For each output pixel p:
    average = 0;
    For each input texel t:
        if (distance(t, p) <= threshold)
            average += image(t);
    Average them.
    Write to output.

Based on the logic above we make a binary decision to either include a texel in our calculation or exclude it. If it’s within our distance threshold we include it, otherwise we don’t. What if we were to express that as a multiplier? We’ll multiply by 1 if we want to include the texel, and by 0 if we want to exclude it.

For each output pixel p:
    average = 0;
    For each input texel t:
        if (distance(t, p) <= threshold)
            average += image(t) * 1;
        else
            average += image(t) * 0;
    Average them.
    Write to output.

Let’s start calling this multiplier a ‘weight’. Our algorithm works out the weight of each texel, i.e. how much it contributes towards that average.

For each output pixel p:
    average = 0;
    For each input texel t:
        weight = 0;
        if (distance(t, p) <= threshold)
            weight = 1;
        else
            weight = 0;
        average += image(t) * weight;
    Average them.
    Write to output.

Now, let’s take a look at that ‘Average them’ statement. The blur operation takes in contributions from many texels and produces a single color. As long as the contribution from each (participating) texel is the same, combining them together simply means taking their average. Since participating texels have a weight of 1 and non-participating texels a weight of 0, simply adding up the weights will give us the number of participating texels.

For each output pixel p:
    average = 0;
    totalWeight = 0;
    For each input texel t:
        weight = 0;
        if (distance(t, p) <= threshold)
            weight = 1;
        else
            weight = 0;
        average += image(t) * weight;
        totalWeight += weight;
    average /= totalWeight;
    Write to output.

Expressing things via weight and totalWeight allows for an important mental leap. Not all contributions have to be the same. Each texel may contribute a different amount towards the final pixel. As long as we add up all the weights and divide the sum of the weighted texels by that total, we’ll get a (mathematically) correct answer. This is known as a weighted average.
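As a quick sanity check with made-up numbers: two texels with values 2 and 4 and weights 1 and 3 combine to

$$\frac{2 \cdot 1 + 4 \cdot 3}{1 + 3} = \frac{14}{4} = 3.5$$

The heavier-weighted texel pulls the result towards its own value.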

Let’s rearrange the above pseudo-code a little to decouple the averaging process from the weight calculation.

function TexelWeight(delta)
    if (delta <= threshold)
        return 1;
    else
        return 0;

For each output pixel p:
    average = 0;
    totalWeight = 0;
    For each input texel t:
        weight = TexelWeight(distance(t, p));
        average += image(t) * weight;
        totalWeight += weight;
    average /= totalWeight;
    Write to output.

The averaging process holds true regardless of the implementation of the TexelWeight function. We just need that function to return the contribution weight, given the distance between the texel being considered and the pixel being rendered.

The input to the weight function is important. Rather than passing t and p, we’re passing the (signed) distance between them. We’ll visualize this in a bit, but this effectively means that the weight function moves/slides across the domain of the input function. Passing a distance of zero means we’re asking for the contribution of the texel at the same location as the pixel we’re writing to. Passing distances away from zero means we’re asking for the contribution factors of the texels away from the pixel being shaded.
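To make this concrete, here’s a minimal runnable JavaScript sketch of the same algorithm over a 1D array of samples. The names (filter, texelWeight) and the brute-force loop are just illustrative assumptions, not how the editor implements it:

function texelWeight(delta, threshold) {
    // Box filter: every sample within the threshold contributes equally.
    return Math.abs(delta) <= threshold ? 1 : 0;
}

function filter(input, threshold) {
    const output = [];
    for (let p = 0; p < input.length; p++) {
        let average = 0;
        let totalWeight = 0;
        for (let t = 0; t < input.length; t++) {
            // Pass the signed distance t - p, not t and p themselves.
            const weight = texelWeight(t - p, threshold);
            average += input[t] * weight;
            totalWeight += weight;
        }
        output.push(average / totalWeight);
    }
    return output;
}

For example, filter([1, 5, 3, 8, 2], 1) averages each sample with its immediate neighbours, which is the 1D equivalent of the blur we started with.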

If we think of TexelWeight as more of a mathematical function than a programming one, we should be able to plot its graph.

We can do this in the editor. Just add a Function Graph object and set the function code to this (or just right click on the diagram area and select Edit Local Copy!):

return Math.abs(x) < 1 ? 1 : 0;

This type of filtering function is called a box filter and it’s characterised by the fact that all contributing samples have the same weight, i.e. it’s a straight average. Note that in the above formulation we normalise the sum by dividing by the sum of the weights. You’ll often see filtering functions (or ‘kernels’) where the weights have already been normalised so that they add up to 1.
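As a tiny illustration of a pre-normalised kernel (the array and helper below are hypothetical, not editor code), here’s a three-tap box filter whose weights already sum to 1, so the final division by totalWeight becomes unnecessary:

// Pre-normalised 3-tap box kernel: the weights sum to 1.
const kernel = [1 / 3, 1 / 3, 1 / 3];

function filterAt(input, p) {
    let result = 0;
    for (let i = 0; i < kernel.length; i++) {
        // Clamp to the array bounds so samples near the edges stay valid.
        const t = Math.min(input.length - 1, Math.max(0, p + i - 1));
        result += input[t] * kernel[i];
    }
    return result; // no division needed
}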

The Box, Tent, and Gaussian filters are three of the most commonly used filters.
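The box version is the one we plotted above. If you want to try the other two in the editor, something along these lines should work as the function code (both unnormalised, with unit width/sigma, so treat them as sketches):

// tent
return Math.max(0, 1 - Math.abs(x));

// Gaussian (sigma = 1)
return Math.exp(-x * x / 2);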

Notice how all these filtering functions are centered around the x=0 axis. That’s because the filtering function is applied over the entire range where the base function is defined, ‘grabbing’ nearby values and doing a weighted average to produce a set of output values.

Let’s see this in action! On the diagram below we have the base function at the top, the filtering function in the middle and the convolution integral at the bottom. The convolution process ‘fetches’ values from the base function, multiplies each value by the value of the filter function and finally adds all these results together. This process happens once for each point on the resulting function. Notice how the filter function ‘slides’ across the range of the base function, combining multiple input values into a single output.
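That multiply-and-sum process is exactly what the convolution integral expresses. For a base function f and a filter kernel g:

$$(f * g)(x) = \int_{-\infty}^{\infty} f(\tau)\, g(x - \tau)\, d\tau$$

The x − τ term is the signed distance we were passing to TexelWeight earlier; in the discrete pixel case the integral collapses into the weighted sum from our pseudo-code.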

I’ll close things off with some more examples. In the diagram below you can see how a random noise input function is smoothed out by the various filtering kernels. You can try out different filters, but also vary the width of each. The wider the kernel, the smoother the resulting graph becomes.

All this of course applies to polar graphs as well, where things get more interesting and more relevant to graphics programming.

For a more practical example of how convolution integrals can be used in computer graphics check out the post on cubemap filtering!