HD Movement Tracking: further and final iteration

Well, the end of year show has come and gone, and all that remains is the write-up. Here's a quick rundown of the work that I showed and some of the development that went into it. I'll also show the code I cobbled together (largely from other people's code) to do it. If you've not seen it already, you might want to take a look at the first and second posts that show the earlier stages. Done? Onwards!

To recap slightly, the first step is to compare two adjacent frames to identify pixels that have changed. The algorithm I used for that was taken directly from the Frame Differencing example that ships with Processing. Then we need to threshold the result, so anything that has changed is white and anything that hasn't is black. Here's a single frame with this process applied:
[image: thresholded frame difference]
Here’s a version (in opposite colours) where movement leaves a trail over several frames:
[image: movement trails over several frames, colours inverted]
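In plain Java, the differencing-plus-thresholding step boils down to something like this. This is a minimal sketch on packed ARGB ints, not the actual Processing code (class and method names are my own):

```java
public class FrameDiff {
    // Compare two frames pixel by pixel: sum the per-channel differences,
    // then threshold to white (changed) or black (unchanged).
    static int[] diffThreshold(int[] prev, int[] curr, int threshold) {
        int[] out = new int[curr.length];
        for (int i = 0; i < curr.length; i++) {
            int dr = Math.abs(((curr[i] >> 16) & 0xFF) - ((prev[i] >> 16) & 0xFF));
            int dg = Math.abs(((curr[i] >> 8) & 0xFF) - ((prev[i] >> 8) & 0xFF));
            int db = Math.abs((curr[i] & 0xFF) - (prev[i] & 0xFF));
            out[i] = (dr + dg + db > threshold) ? 0xFFFFFFFF : 0xFF000000;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] prev = { 0xFF000000, 0xFF808080 };
        int[] curr = { 0xFF000000, 0xFFFFFFFF };
        int[] out = diffThreshold(prev, curr, 30);
        System.out.println(Integer.toHexString(out[0])); // unchanged pixel -> ff000000
        System.out.println(Integer.toHexString(out[1])); // changed pixel  -> ffffffff
    }
}
```

The sketch below does the same thing but writes the raw differences to `output` and applies `filter(THRESHOLD)` afterwards; the end result is the same monochrome mask.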
Once we have a nice clean monochrome image we can run BlobScanner, which identifies any large blocks of pixels and calculates their centroids and bounding boxes. The centroid coordinates are fed to the Mesh library, which calculates and draws a Delaunay triangulation using them, giving a rough outline of the identified movement:
[images: stills separationVoronoi2_0017 and separationVoronoi2_0140]
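BlobScanner does the heavy lifting here, but the two features I actually use are easy to sketch. Assuming you already have the pixel coordinates belonging to one blob, its centroid and axis-aligned bounding box are just (hypothetical helpers, not BlobScanner's API):

```java
public class BlobStats {
    // Centroid of a blob: the mean of its pixel coordinates.
    static double[] centroid(int[] xs, int[] ys) {
        double sx = 0, sy = 0;
        for (int i = 0; i < xs.length; i++) { sx += xs[i]; sy += ys[i]; }
        return new double[] { sx / xs.length, sy / ys.length };
    }

    // Bounding box of a blob: {minX, minY, maxX, maxY}.
    static int[] boundingBox(int[] xs, int[] ys) {
        int minX = xs[0], maxX = xs[0], minY = ys[0], maxY = ys[0];
        for (int i = 1; i < xs.length; i++) {
            minX = Math.min(minX, xs[i]); maxX = Math.max(maxX, xs[i]);
            minY = Math.min(minY, ys[i]); maxY = Math.max(maxY, ys[i]);
        }
        return new int[] { minX, minY, maxX, maxY };
    }

    public static void main(String[] args) {
        // Four pixels at the corners of a 2x2 square.
        double[] c = centroid(new int[]{0, 2, 0, 2}, new int[]{0, 0, 2, 2});
        System.out.println(c[0] + "," + c[1]); // prints 1.0,1.0
    }
}
```

In the real sketch the bounding box corners arrive as four `PVector` arrays from BlobScanner and the centroids go straight into the Mesh library's `Delaunay` constructor.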

Now, the original plan was to get some big (A1+ size) prints made, so I tried some simple black and white tests. This samples every other frame, IIRC, and the frames fade over time simply by drawing a slightly transparent, screen-sized rectangle each frame. It feels a bit fast:
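The translucent-rectangle trick is just a per-channel blend toward the background colour, applied once per frame. Roughly (a sketch of the maths, names my own):

```java
public class FadeStep {
    // One fade step for a single 0-255 channel value: move it toward the
    // background channel by alpha/255. Drawing a background-coloured rect
    // with that alpha over the whole screen does this to every pixel.
    static int fade(int channel, int bgChannel, int alpha) {
        return channel + (bgChannel - channel) * alpha / 255;
    }

    public static void main(String[] args) {
        System.out.println(fade(0, 255, 10)); // a black pixel creeps toward white: prints 10
    }
}
```

Repeating this every frame is what turns old lines into ghostly trails; a higher alpha makes them vanish faster, which is why the test above feels hurried.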


I couldn’t settle on a good way to display lots of frames in one print, so I scrapped the idea of doing just prints and looked at the video again to see what could be improved. Sampling colours from an image is one of my favourite techniques for natural-looking colour palettes, so each line samples its colour from the pixel of the original image where it starts. OpenGL additive blending makes it sparkle a bit more, especially where lots of lines cluster together. Like this:
[images: stills separationVoronoi4_2697 and separationVoronoi4_2684]
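For one colour channel, the `GL_SRC_ALPHA`/`GL_ONE` blend mode set up in the sketch works out to roughly this (a sketch of the arithmetic, not the GL code):

```java
public class AdditiveBlend {
    // GL_SRC_ALPHA, GL_ONE for one channel:
    // result = min(255, dst + src * alpha/255).
    // Unlike normal alpha blending, nothing is subtracted from what's
    // already on screen, so overlapping lines accumulate brightness.
    static int blend(int dst, int src, int srcAlpha) {
        return Math.min(255, dst + src * srcAlpha / 255);
    }

    public static void main(String[] args) {
        System.out.println(blend(200, 200, 255)); // saturates: prints 255
    }
}
```

That saturation toward white wherever many lines cross is exactly the sparkle in the stills above.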

Here’s the video. I added the music specially for the online version; at the exhibition it ran silently on a loop.

I didn’t give up on print entirely, though: I quickly hacked PDF recording into my sketch and ran some A3 prints off on my home printer. Using cartridge paper gives them a lovely delicate texture. Here are a pair of them rendered from Acrobat as images; I’m slightly baffled by how the Processing PDF renderer deals with colour, but they worked out pretty well and got some very favourable comments from those who attended the show:
[images: prints output1 and output2]

So there we are. I’m pretty happy with how this project has worked out: I’ve learned a lot, created something beautiful (IMO at least) and gotten some good feedback about it too. This was the final unit of my college course, and it seems like a fitting end to what has been an excellent couple of years for me.

As a final gift, here’s the Processing code. You’ll need the BlobScanner and Mesh libraries (linked above) to make it run. As an aside, I had trouble getting the Processing video libraries to work on my system; rather than fix the issue I just used an image sequence instead, but it shouldn’t be hard to adapt it back to work with video. I’m under no illusions about the quality of my code here: it’s not quick, and a better coder than me could probably find a number of ways to improve it. It did what I needed it to do, and I’m putting it out there in the hope that it will be useful to someone else. I’ve tried to comment it reasonably well, but please get in touch if you have any questions or criticisms. Cheers!

If WordPress has mangled the bit-shifting section despite the source code tags, have a look at the Processing Frame Differencing example to check that bit.
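For reference, the bit-shifting in question just unpacks the 8-bit channels from Processing's packed 0xAARRGGBB colour int, and is much faster than calling `red()`, `green()` and `blue()`:

```java
public class Channels {
    // Extract 8-bit channels from a packed 0xAARRGGBB int by shifting the
    // wanted byte into the low position and masking off the rest.
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    public static void main(String[] args) {
        System.out.println(red(0xFF336699));   // prints 51  (0x33)
        System.out.println(green(0xFF336699)); // prints 102 (0x66)
        System.out.println(blue(0xFF336699));  // prints 153 (0x99)
    }
}
```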

import megamu.mesh.*;
import processing.pdf.*;
import Blobscanner.*;
import processing.opengl.*;
import javax.media.opengl.*;

PGraphicsOpenGL pgl;
GL gl;

PImage img1; //current input frame
PImage output; //processed for blob detection
Detector bd;
Delaunay d;

float boxAlpha=35;
float lineAlpha=55;

color bg= color(255);

boolean fade= false; 
boolean glBlend= false;
int index=1500; //frame number to start at
int pdfCount= 1;
int[] prevFrame;
int numPixels;
int interval= 5; //number of input frames to progress each cycle
float[][] pts;

String filepath= "THE FOLDER WHERE YOUR IMAGES ARE";

void setup() {
  size(1280, 720, OPENGL);
  smooth();
  bd= new Detector(this, 0, 0, width, height, 255);
  numPixels= width*height;
  prevFrame= new int[numPixels];
  output= loadImage(filepath+"0000.png");
  background(bg);
  strokeWeight(0.1);
  noFill();
}

void draw() {
  if (fade) {
    fill(bg, 10);
    rect(0, 0, width, height);
  }
  //else { background(bg); } //uncomment to clear completely each frame instead
  if (glBlend) {
    //set up additive blending
    pgl= (PGraphicsOpenGL) g;
    gl=pgl.gl;
    pgl.beginGL();
    gl.glDisable(GL.GL_DEPTH_TEST);
    gl.glEnable(GL.GL_BLEND);
    gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE); 
    pgl.endGL();
  }
  println(index);

  analyse(); //identify changes from last frame

  output.filter(THRESHOLD, 0.1); //changed pixels become white, the rest black
  output.filter(BLUR);
  //image(img1, 0, 0);    //show the original image
  //image(output, 0, 0);  //show the processed image
  blobScan();             //calculate blobs and draw
  if (d == null) return;  //skip drawing if the triangulation failed this frame
  float[][] myEdges = d.getEdges(); 

  for (int i=0; i<myEdges.length; i++) { //draw the delaunay triangulation
    float startX = myEdges[i][0];
    float startY = myEdges[i][1];
    float endX = myEdges[i][2];
    float endY = myEdges[i][3];

    if (dist(startX, startY, endX, endY)<150) { //only connect points relatively close to each other
      color lineColor= img1.get(int(startX), int(startY)); //line colour is the same as the pixel of the original image it starts at
      stroke(lineColor, lineAlpha);
      line( startX, startY, endX, endY );
    }
  }
  //saveFrame(sketchPath +"/output/####.png");
}

void keyPressed() {
  if (key == 'q' ) { //finish PDF writing if applicable
    endRecord();
    background(bg);
    println("finished writing PDF");
    pdfCount++;
    //exit();
  }
  else if (key=='r' ) {
    background(bg);
    println("begin writing PDF");
    beginRecord(PDF, "output"+pdfCount+".pdf");
  }
}


void blobScan() {
  bd.imageFindBlobs(output);
  int totalBlobs= bd.getBlobsNumber();
  pts= new float[totalBlobs][2];
  bd.loadBlobsFeatures();
  bd.weightBlobs(false);
  bd.findCentroids(false, false);
  PVector[] aa= bd.getA(); //get bounding box corners
  PVector[] bb= bd.getB();
  PVector[] cc= bd.getC();
  PVector[] dd= bd.getD();

  for (int i=0; i<totalBlobs; i++) {
    float  x1= bd.getCentroidX(i);
    float y1= bd.getCentroidY(i);
    pts[i][0]= x1; //add the centroid of each blob to points array
    pts[i][1]= y1;

    color boxColor= img1.get(int(x1), int(y1)); //get the colour of the pixel at the blob centroid
    stroke(boxColor, boxAlpha);
    noFill();
    beginShape(QUADS);    //draw the bounding box, corners in drawing order
    vertex(aa[i].x, aa[i].y);
    vertex(bb[i].x, bb[i].y);
    vertex(dd[i].x, dd[i].y);
    vertex(cc[i].x, cc[i].y);
    endShape();
  }

  try {
    d=new Delaunay(pts); //triangulate centre points
  }
  catch(Exception e) {
    println("mesh error"); //occasionally throws an error and I have neither the time nor the inclination to find out why. Not common enough to cause a problem in this instance.
  }
}

void analyse() { //identify pixels which have changed since the last frame. Adapted from the Processing Frame Differencing example by Golan Levin
  img1= loadImage(filepath + nf(index, 4) +".png");
  img1.loadPixels();

  int movementSum= 0;
  for (int i=0; i<numPixels; i++) {
    color currColor= img1.pixels[i];
    color prevColor= prevFrame[i];

    int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
    int currG = (currColor >> 8) & 0xFF;
    int currB = currColor & 0xFF;
    // Extract red, green, and blue components from previous pixel
    int prevR = (prevColor >> 16) & 0xFF;
    int prevG = (prevColor >> 8) & 0xFF;
    int prevB = prevColor & 0xFF;
    // Compute the difference of the red, green, and blue values
    int diffR = abs(currR - prevR);
    int diffG = abs(currG - prevG);
    int diffB = abs(currB - prevB);
    // Add these differences to the running tally
    movementSum += diffR + diffG + diffB;

    output.pixels[i]= color(diffR, diffG, diffB);
    prevFrame[i]= currColor;
  }
  if (movementSum>0) {
    output.updatePixels();
  }
  index+=interval;
}


5 Responses to HD Movement Tracking: further and final iteration

  1. HVP says:

    Very nice!!!!

    What tool are you using to extract images from video and, after processing, to reconstruct the video from images?

    HVP

    • Thanks!
      I used Premiere to export the source as images and to stitch the output together. A better way would be to use the Processing video library, at least for importing into the sketch, but I couldn’t get that to run on my desktop for reasons unknown.
      Cheers
      Kyle

      • HVP says:

        I think there is a conflict bug between the opengl and video libraries. It’s fixed in some Processing versions, then breaks again on update….

        HVP

  2. Ale says:

    Very inspiring idea and a really interesting blog.
    Thanks for sharing!
    ^-^

