Images in Processing

Creative Computing

Marco Gillies and Mick Grierson

Overview and aims

By the end of the lecture you will be able to:

  • Explain the representation of images in Processing
  • Explain convolution and how image filters work
  • Load and display an image in Processing
  • Apply a built-in filter to an image
  • Implement your own filters

Loading and Displaying an image in Processing

// create a variable to store an image
PImage image1;

void setup()
{
  size(500, 500);

  // load the image from a file
  image1 = loadImage("11_original.jpg");

  // display the image at position 0, 0
  image(image1, 0, 0);
}

Load and display an image in Processing

PImage is the class for representing images; see the reference for more details:

http://processing.org/reference/PImage.html

loadImage() loads an image from a file.

image() displays it on the screen; the last two arguments are the position of its top-left corner.

data folder

You should put your image in your sketch's data folder. This is a folder called “data” inside your main sketch folder. If you don't put your image here you will have to specify a complete path, and exporting to a web applet won't work.

You can add files to your data folder from the Processing environment by choosing Sketch -> Add File.

Filters

  • Filters are a powerful image processing technique
  • They can act to change the appearance of a whole image

PImage b;
b = loadImage("topanga.jpg");
image(b, 0, 0);
filter(THRESHOLD);
Processing has a built-in function, filter(), that performs a number of filtering operations:

http://processing.org/reference/filter_.html

The previous bit of code performs the filter on the screen after you have drawn the image to it.

You can also perform a filter on an image directly:

PImage b;
b = loadImage("topanga.jpg");
b.filter(THRESHOLD);
image(b, 0, 0);
This is fine if you want to use one of the existing filters, but what if you want a different one? You have to implement your own.

Overview and aims

By the end of the lecture you will be able to:

  • Explain the representation of images in Processing
  • Explain convolution and how image filters work
  • Load and display an image in Processing
  • Apply a built-in filter to an image
  • Implement your own filters

Filters

  • A filter steps through pixels one by one
  • Alters each pixel in some way
Typically a filter will be some sort of mathematical operation.

  • Divide all colour values by 2 to make the image darker
  • Add something to the red channel to give the image a red tint
  • Threshold: set a pixel to white if its value is above a certain cut-off, black otherwise

To do this you need to use a loop.

// loop over all the pixels of the image
for (int x = 0; x < image1.width; x++)
{
  for (int y = 0; y < image1.height; y++)
  {
    // get the colour of the pixel at x, y
    color pixel = image1.get(x, y);

    // boost the red channel by 40% to give the image a red tint
    pixel = color(int(red(pixel)*1.4), int(green(pixel)), int(blue(pixel)));

    // draw the new colour directly to the screen at x, y
    stroke(pixel);
    point(x, y);
  }
}
This code draws the result directly to the screen; the next version applies it back to the image:
// loop over all the pixels of the image
for (int x = 0; x < image1.width; x++)
{
  for (int y = 0; y < image1.height; y++)
  {
    // get the colour of the pixel at x, y
    color pixel = image1.get(x, y);

    // boost the red channel by 40% to give the image a red tint
    pixel = color(int(red(pixel)*1.4), int(green(pixel)), int(blue(pixel)));

    // set the pixel of the image to the new colour
    image1.set(x, y, pixel);
  }
}
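
The same pattern covers the threshold example from the list above. Here is a minimal sketch of how it might look (the cut-off value of 128 is an arbitrary choice):

// threshold: set each pixel to white if it is brighter
// than a cut-off value, and to black otherwise
for (int x = 0; x < image1.width; x++)
{
  for (int y = 0; y < image1.height; y++)
  {
    color pixel = image1.get(x, y);
    // brightness() reduces the colour to a single 0-255 value
    if (brightness(pixel) > 128) {
      image1.set(x, y, color(255)); // white
    } else {
      image1.set(x, y, color(0));   // black
    }
  }
}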

This is slow!

We are accessing each pixel individually, and because there are a lot of pixels, we are having to do a lot of calculations. High-level methods like PImage.get and point are fine if you are only dealing with a couple of pixels, but get slow if you are accessing all of them. We need to get hold of the pixels of the image directly.

Images are stored as an array of pixel values. Though we think of an image as 2D, the pixels are stored in a single 1D array: the entire first row is stored first, followed by the entire second row, and so on.

This can be handy, as much of the time we only need one loop.

// load pixels into the pixels array
image1.loadPixels();

for (int i = 0; i < image1.width*image1.height; i++)
{
  color col = image1.pixels[i];
  // invert the red channel of the colour value
  image1.pixels[i] = color(255-red(col), green(col), blue(col));
}
image1.updatePixels();
image(image1, 0, 0);

Here we step through the array pixel by pixel. The total size of the array is the width of the image times its height, so that is the bound we use in our for loop.

image1.loadPixels() loads the pixels of the image into the image1.pixels array so you can edit them. image1.updatePixels() writes the edited pixels back.

There are equivalent loadPixels() and updatePixels() methods, and a pixels array, for the screen itself, so you can write directly to the screen.
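
For example, here is a minimal, self-contained sketch that writes a greyscale ramp directly to the screen's pixels array:

void setup()
{
  size(500, 500);
}

void draw()
{
  // load the screen's pixels so we can edit them
  loadPixels();
  for (int i = 0; i < width*height; i++)
  {
    // fade from black at the top to white at the bottom
    pixels[i] = color(255.0 * i / (width*height));
  }
  // write the edited pixels back to the screen
  updatePixels();
}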

Sometimes we need to access the array by x and y position. All you need to know is the correct way to index into the array. The pixels are stored row by row, so you first need to skip past the rows above the current one, which is y times the row width, and then move x pixels along the current row:

index = x + y*width

// loop over all pixels in the pixels array
for (int x = 0; x < image1.width; x++)
{
  for (int y = 0; y < image1.height; y++)
  {
    // get the colour value of the pixel at x, y using the
    // formula above to go from a 2D position to a 1D array
    // index (note: the image's own width, not the sketch's)
    color col = image1.pixels[x + y*image1.width];
    // do something with col
  }
}
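
This indexing matters whenever the operation depends on position. For instance, a sketch of mirroring the image left to right (a made-up example, not from the lecture) might look like:

// mirror the image left to right, which needs both
// coordinates rather than a single running index
PImage mirrored = createImage(image1.width, image1.height, RGB);
image1.loadPixels();
mirrored.loadPixels();
for (int x = 0; x < image1.width; x++)
{
  for (int y = 0; y < image1.height; y++)
  {
    // the pixel at x comes from the opposite side of the row
    int flippedX = image1.width - 1 - x;
    mirrored.pixels[x + y*image1.width] = image1.pixels[flippedX + y*image1.width];
  }
}
mirrored.updatePixels();
image(mirrored, 0, 0);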

Neighbourhood filters

  • Many filters use near by pixels to choose the new value of a pixel
  • Typically a sqaure of pixels surrounding the one we want
  • Called a neighbourhood

Convolution

Neighbourhood filters use a method called convolution. You may have encountered convolution in audio processing.

When processing an audio signal, a convolution adds previous (neighbouring)
samples to the current sample in a specific pattern that determines the filter
type. The pattern is given by a number of coefficients that are multiplied by
the previous samples before they are added to the result.
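
As a rough sketch of the 1D case (the function name and the smoothing coefficients here are arbitrary choices for illustration):

// a 1D convolution (FIR filter) on an array of samples,
// analogous to the image case but in one dimension
float[] convolve1D(float[] signal, float[] coeffs)
{
  float[] out = new float[signal.length];
  int offset = coeffs.length / 2;
  for (int n = 0; n < signal.length; n++)
  {
    float total = 0;
    for (int k = 0; k < coeffs.length; k++)
    {
      // index of the neighbouring sample, clamped at the edges
      int i = constrain(n + k - offset, 0, signal.length - 1);
      total += signal[i] * coeffs[k];
    }
    out[n] = total;
  }
  return out;
}

// e.g. a simple smoothing pattern:
// float[] smoothed = convolve1D(samples, new float[] { 0.25, 0.5, 0.25 });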

For an image, a convolution consists of adding the values of a pixel's
neighbours (both before and after it) to it in a specific pattern. The pattern
is still given by a set of coefficients, but these coefficients are arranged in
a 2D square array centred on the current pixel. This array is called a kernel.

You "place" the kernel onto the image, centred on the current pixel

output = k[-1,-1]*i[-1,-1] + k[0,-1]*i[0,-1] + k[1,-1]*i[1,-1] + ...

You then multiply each coefficient of the kernel by the image pixel it
is "on top" of and then add the results together to get the output pixel.

You then move the kernel on to the next pixel, and so on

color convolution(int x, int y, float[][] kernel, int kernelsize, PImage img) {
  float rtotal = 0.0;
  float gtotal = 0.0;
  float btotal = 0.0;
  int offset = kernelsize / 2;
 
  // Loop through convolution matrix
  for (int i = 0; i < kernelsize; i++ ) {
    for (int j = 0; j < kernelsize; j++ ) {
 
      // Which pixel of the image are we looking at?
      int xloc = x + i-offset;
      int yloc = y + j-offset;
      int loc = xloc + img.width*yloc;
 
      // Make sure we haven't walked off the edge of the pixel array
      // by clamping the 1D index into the valid range
      loc = constrain(loc, 0, img.pixels.length-1);
 
      // Calculate the convolution
      // We sum all the neighboring pixels multiplied by the values in the convolution matrix.
      rtotal += (red(img.pixels[loc]) * kernel[i][j]);
      gtotal += (green(img.pixels[loc]) * kernel[i][j]);
      btotal += (blue(img.pixels[loc]) * kernel[i][j]);
    }
  }
 
  // Make sure RGB is within range
  rtotal = constrain(rtotal,0,255);
  gtotal = constrain(gtotal,0,255);
  btotal = constrain(btotal,0,255);
 
  // Return the resulting color
  return color(rtotal,gtotal,btotal); 
}

This is based on Daniel Shiffman's code

It is a convolution function that you have to call for all pixels of your image.

The arguments are the position of the pixel you are filtering, the kernel (a 2D float array), the kernel size, and the image you are filtering.

It loops through the pixels of the kernel, getting the equivalent neighbouring pixels of the image and multiplying them by the kernel coefficients. The results from all the pixels of the kernel are added together and returned as the new colour for the pixel at (x, y).

Note what happens at the edge of the image: the 1D index is simply clamped to stay inside the pixels array, which is quick but means pixels at the left and right edges effectively borrow values from adjacent rows.
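
A minimal driver for it might look like this (assuming image1 has been loaded as in the earlier examples, and using the sharpen kernel shown below; writing into a copy keeps the filter reading the original, unfiltered neighbours):

float[][] kernel = { { -1, -1, -1 },
                     { -1,  9, -1 },
                     { -1, -1, -1 } };

image1.loadPixels();
// write into a copy so the filter always reads unfiltered pixels
PImage result = createImage(image1.width, image1.height, RGB);
result.loadPixels();
for (int x = 0; x < image1.width; x++)
{
  for (int y = 0; y < image1.height; y++)
  {
    result.pixels[x + y*image1.width] = convolution(x, y, kernel, 3, image1);
  }
}
result.updatePixels();
image(result, 0, 0);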

Example kernels

Sharpen

float[][] kernel = { { -1, -1, -1 } , 
                     { -1, 9, -1 } ,
                     { -1, -1, -1 } } ;

Blur

float[][] kernel = { { 1.0/9, 1.0/9, 1.0/9 } , 
                     { 1.0/9, 1.0/9, 1.0/9 } ,
                     { 1.0/9, 1.0/9, 1.0/9 } } ;

Gaussian Blur

float[][] kernel = { { 1.0/16, 2.0/16, 1.0/16 } , 
                     { 2.0/16, 4.0/16, 2.0/16 } ,
                     { 1.0/16, 2.0/16, 1.0/16 } } ;

Edge Detect (horizontal)

float[][] kernel = { { -1,  0, 1 } , 
                     { -2,  0, 2 } ,
                     { -1,  0, 1 } } ;

Edge Detect (vertical)

float[][] kernel = { { -1,  -2, -1 } , 
                     {  0,   0,  0 } ,
                     {  1,   2,  1 } } ;

Video

Processing has a built in library for handling video, see the reference:

http://processing.org/reference/libraries/video/index.html

It includes two classes: Capture, for getting live video from a camera, and Movie, for (QuickTime) movie files. The great thing is that both of these classes inherit from PImage, so anything you can do with a PImage you can do with video. Try all of the things I've shown you with video.
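
For example, a minimal sketch applying a built-in filter to live camera input might look like this (assuming the video library is installed and a camera is available):

import processing.video.*;

Capture cam;

void setup()
{
  size(640, 480);
  // open the default camera at the sketch size
  cam = new Capture(this, width, height);
  cam.start();
}

// called whenever a new frame arrives from the camera
void captureEvent(Capture c)
{
  c.read();
}

void draw()
{
  // Capture inherits from PImage, so filter() works on it directly
  cam.filter(THRESHOLD);
  image(cam, 0, 0);
}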

Overview and aims

By the end of the lecture you will be able to:

  • Explain the representation of images in Processing
  • Explain convolution and how image filters work
  • Load and display an image in Processing
  • Apply a built-in filter to an image
  • Implement your own filters