HD Movement Tracking: further and final iteration

Well, the end-of-year show has come and gone, and all that remains is the write-up. Here’s a quick rundown of the work I showed and some of the development that went into it. I’ll also show the code I wrote (or rather, cobbled together from other people’s code) to do it. If you’ve not seen them already, you might want to take a look at the first and second posts that cover the earlier stages. Done? Onwards!

To recap slightly, the first step is to compare two adjacent frames to identify pixels that have changed. The algorithm I used for that was taken directly from the frame differencing example that ships with Processing. Then we threshold the result, so anything that has changed is white and anything that hasn’t is black. Here’s a single frame with this process applied:
[image: a single frame with differencing and threshold applied]
Here’s a version (with the colours inverted) where movement leaves a trail over several frames:
[image: movement trails over several frames, colours inverted]
Once we have a nice clean monochrome image we can run BlobScanner, which identifies any large blocks of pixels and calculates their centroids and bounding boxes. The centroid coordinates are fed to the Mesh library, which calculates and draws a Delaunay triangulation from them, giving a rough outline of the identified movement:
[image: separationVoronoi2_0017]
[image: separationVoronoi2_0140]

Now, the original plan was to get some big (A1+ size) prints made, so I tried some simple black-and-white tests. This samples every other frame, IIRC, and the marks fade over time simply by drawing a slightly transparent, screen-sized rectangle every frame. It feels a bit fast:


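The fade itself is just a couple of lines at the top of draw(), the same ones you’ll find in the full sketch at the end of this post:

//a slightly transparent, background-coloured rectangle drawn over
//everything; old marks sink away a little more each frame
fill(bg, 10);
rect(0, 0, width, height);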
I couldn’t settle on a good way to display lots of frames in one print, so I scrapped the idea of doing just prints and looked at the video again to see what could be improved. Sampling colours from an image is one of my favourite techniques for natural-looking colour palettes, so each line takes its colour from the pixel of the original image it starts at. OpenGL additive blending makes it sparkle a bit more, especially where lots of lines cluster together. Like this:
[image: separationVoronoi4_2697]
[image: separationVoronoi4_2684]
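If you fancy trying the colour-sampling trick on its own, here’s a minimal standalone sketch of the idea; the image name is a placeholder, and every stroke takes its colour from the pixel it starts on:

PImage src;

void setup() {
  size(1280, 720);
  src = loadImage("source.jpg"); //placeholder image name
  background(255);
}

void draw() {
  float x = random(width);
  float y = random(height);
  stroke(src.get(int(x), int(y)), 55); //sample the colour of the underlying pixel
  line(x, y, x + random(-50, 50), y + random(-50, 50));
}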

Here’s the video. I added the music specially for the online version- at the exhibition it ran silently on a loop.

I didn’t give up on print entirely though- I quickly hacked PDF recording into my sketch and fired some A3 prints out on my home printer. Using cartridge paper gives them a lovely, delicate texture. Here’s a pair of them, rendered from Acrobat as images; I’m slightly baffled by how the Processing PDF renderer handles colour, but they worked out pretty well and got some very favourable comments from those who attended the show:
[image: output1]
[image: output2]

So there we are. I’m pretty happy with how this project has worked out: I’ve learned a lot, created something beautiful (IMO at least) and gotten some good feedback about it too. This was the final unit of my college course, and it seems like a fitting end to what has been an excellent couple of years for me.

As a final gift, here’s the Processing code. You’ll need the BlobScanner and Mesh libraries (linked above) to make it run. As an aside, I had trouble getting the Processing video libraries to work on my system; rather than fix the issue I just used an image sequence instead, but it shouldn’t be hard to adapt it back to work with video. I’m under no illusions about the quality of my code here- it’s not quick, and a better coder than me could probably find a number of ways to improve it. It did what I needed it to do, and I’m putting it out there in the hope that it will be useful to someone else. I’ve tried to comment it reasonably well, but please get in touch if you have any questions or criticisms. Cheers!


import megamu.mesh.*;
import processing.pdf.*;
import Blobscanner.*;
import processing.opengl.*;
import javax.media.opengl.*;

PGraphicsOpenGL pgl;
GL gl;

PImage img1; //current input frame
PImage output; //processed for blob detection
Detector bd;
Delaunay d;

float boxAlpha=35;
float lineAlpha=55;

color bg= color(255);

boolean fade= false; 
boolean glBlend= false;
int index=1500; //frame number to start at
int pdfCount= 1;
int[] prevFrame;
int numPixels;
int interval= 5; //number of input frames to progress each cycle
float[][] pts;

String filepath= "THE FOLDER WHERE YOUR IMAGES ARE";

void setup() {
  size(1280, 720, OPENGL);
  smooth();
  bd= new Detector(this, 0, 0, width, height, 255);
  numPixels= width*height;
  prevFrame= new int[numPixels];
  output= loadImage(filepath+"0000.png");
  background(bg);
  strokeWeight(0.1);
  noFill();
}

void draw() {
  if (fade) {
    fill(bg, 10); //a slightly transparent, screen-sized rectangle fades old marks over time
    rect(0, 0, width, height);
  }
  else {
    //background(bg); //uncomment to clear the canvas completely each frame
  }
  if (glBlend) {
    //set up additive blending
    pgl= (PGraphicsOpenGL) g;
    gl=pgl.gl;
    pgl.beginGL();
    gl.glDisable(GL.GL_DEPTH_TEST);
    gl.glEnable(GL.GL_BLEND);
    gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE); 
    pgl.endGL();
  }
  println(index);

  analyse(); //identify changes from last frame

  output.filter(THRESHOLD, 0.1); //anything that changed becomes white, everything else black
  output.filter(BLUR);
  //image(img1, 0, 0);    //show the original image
  //image(output, 0, 0);  //show the processed image
  blobScan();             //calculate blobs and draw
  float[][] myEdges = d.getEdges(); 

  for (int i=0; i<myEdges.length; i++) { //draw the delaunay triangulation
    float startX = myEdges[i][0];
    float startY = myEdges[i][1];
    float endX = myEdges[i][2];
    float endY = myEdges[i][3];

    if (dist(startX, startY, endX, endY)<150) { //only connect points relatively close to each other
      color lineColor= img1.get(int(startX), int(startY)); //line colour is the same as the pixel of the original image it starts at
      stroke(lineColor, lineAlpha);
      line( startX, startY, endX, endY );
    }
  }
  //saveFrame(sketchPath +"/output/####.png");
}

void keyPressed() {
  if (key == 'q' ) { //finish PDF writing if applicable
    endRecord();
    background(bg);
    println("finished writing PDF");
    pdfCount++;
    //exit();
  }
  else if (key=='r' ) {
    // record= true;
    background(bg);
    println("begin writing PDF");
    beginRecord(PDF, "output"+pdfCount+".pdf");
  }
}


void blobScan() {
  bd.imageFindBlobs(output);
  int totalBlobs= bd.getBlobsNumber();
  pts= new float[totalBlobs][2];
  bd.loadBlobsFeatures();
  bd.weightBlobs(false);
  bd.findCentroids(false, false);
  PVector[] aa= bd.getA(); //get bounding box corners
  PVector[] bb= bd.getB();
  PVector[] cc= bd.getC();
  PVector[] dd= bd.getD();

  for (int i=0; i<totalBlobs; i++) {
    float  x1= bd.getCentroidX(i);
    float y1= bd.getCentroidY(i);
    pts[i][0]= x1; //add the centroid of each blob to points array
    pts[i][1]= y1;

    color boxColor= img1.get(int(x1), int(y1)); //get the colour of the pixel at the blob centroid
    stroke(boxColor, boxAlpha);
    noFill();
    beginShape(QUADS);    //draw the bounding box
    vertex(aa[i].x, aa[i].y);
    vertex(bb[i].x, bb[i].y);
    vertex(dd[i].x, dd[i].y);
    vertex(cc[i].x, cc[i].y);
    endShape();
  }

  try {
    d=new Delaunay(pts); //triangulate centre points
  }
  catch(Exception e) {
    println("mesh error"); //occasionally throws an error and I have neither the time nor the inclination to find out why. Not common enough to cause a problem in this instance.
  }
}

void analyse() { //identify pixels which changed from one frame to the next. Adapted from the Frame Differencing example by Golan Levin that ships with Processing
  img1= loadImage(filepath + nf(index, 4) +".png");
  img1.loadPixels();
  output.loadPixels();

  int movementSum= 0;
  for (int i=0; i<numPixels; i++) {
    color currColor= img1.pixels[i];
    color prevColor= prevFrame[i];

    int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
    int currG = (currColor >> 8) & 0xFF;
    int currB = currColor & 0xFF;
    // Extract red, green, and blue components from previous pixel
    int prevR = (prevColor >> 16) & 0xFF;
    int prevG = (prevColor >> 8) & 0xFF;
    int prevB = prevColor & 0xFF;
    // Compute the difference of the red, green, and blue values
    int diffR = abs(currR - prevR);
    int diffG = abs(currG - prevG);
    int diffB = abs(currB - prevB);
    // Add these differences to the running tally
    movementSum += diffR + diffG + diffB;

    output.pixels[i]= color(diffR, diffG, diffB); //write the per-channel difference into the output image
    prevFrame[i]= currColor; //store this pixel for comparison next frame
  }
  if (movementSum>0) {
    output.updatePixels();
  }
  index+=interval;
}


HD Movement Tracking: first iteration

[image: separation10_263]

I shot some updated footage at the right resolution for my St Enoch project from two different points of view. In retrospect, shooting at 1920×1080 was probably excessive for my needs, and can cause extra problems (e.g. I don’t have a big enough monitor, resizing stuff on the fly in Processing is non-trivial, and it takes longer to process), so the results here are 1280×720. The ultimate goal is to make some large (A1-ish) prints which will probably be from PDFs anyway.


The video shows a mixture of the Processing output and the original footage. When working in Processing this doesn’t play in real time (more like 1-2 fps) and it seems really busy played back at 25 fps. I think I’ll need to find a simpler visual solution for the final results.


Another slit-scan image

Went out to grab some better footage for my St Enoch Square project, but thanks to a hilarious(!) mix-up with camera resolutions I didn’t get quite what I was after. Tried some more experiments with it anyway, since it has a more static background (and is therefore easier to pick out movement against).
[image: slitScan1_719]
The weather was pretty changeable, and I like how the changes in the light look almost like geological strata.

update: I sorted out the aspect ratio of the footage and edited out all the bits where no-one but pigeons passes. Here’s an updated image:
[image: slitScan1_919]


St Enoch square: Slit-scan video experiment

Here’s another approach to isolating movement in video- using slit-scanning. The code for this was a quick adaptation of the Processing slit-scan example, with a couple of alterations and a little variation. Without further ado…


Some kinda cool results from a pretty simple technique, I think. It’s certainly much easier this way than doing it analogue!
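For anyone curious, the core of the technique is tiny. Here’s a minimal sketch of the idea, reading an image sequence rather than video (the folder path and four-digit frame names are placeholders, following the same convention as the tracking sketch above); it copies a one-pixel column from the centre of each frame onto successive columns of the canvas:

String filepath = "THE FOLDER WHERE YOUR IMAGES ARE"; //placeholder
int index = 0;
int x = 0; //destination column on the canvas

void setup() {
  size(1280, 720);
  background(0);
}

void draw() {
  PImage frame = loadImage(filepath + nf(index, 4) + ".png");
  if (frame == null) { //stop when we run out of frames
    noLoop();
    return;
  }
  //copy a one-pixel-wide slice from the centre of each frame
  //onto the next column of the canvas
  copy(frame, frame.width/2, 0, 1, frame.height, x, 0, 1, height);
  x = (x + 1) % width;
  index++;
}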


Work in progress: Tracking movement in St Enoch Square

As part of the final unit on my course, we’ve been given a general brief to create a piece based on or in St Enoch Square, one of the larger public spaces in the centre of Glasgow. I have decided to focus on the movement of people through the square, and see if I can create some sort of “data-driven” piece using Processing.

Here is a video showing some of the development work I’ve been doing, using some footage from a previous project.


There are a number of iterative steps going on here…
1: The original, untreated footage.
2 and 3: Using a basic frame differencing technique based on the Processing example by Golan Levin to identify movement between frames. There is a threshold filter so that any movement is one colour and anything still is another colour, with nothing in between. This makes it easy to go to…
4: Using BlobScanner to identify “blobs”, i.e. large areas of continuous pixels. Applied to the previous video it should, in theory, pick out the moving sections, mark the centre of each and draw a bounding box around it. As you can see, with varying degrees of success!
5: Draws a line between blob centres when they’re within a certain distance of each other (see the sketch after this list).
6: Shows the bounding box of each blob and the links between them, on top of the original unprocessed footage.
7: Remove the footage and let the marks remain in place for a while and fade. This is getting closer to how I originally envisaged the piece, while what went before was more about getting it to work in the first place.
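For reference, step 5 is nothing fancier than a pair of nested loops over the blob centres, along these lines (pts holds the centroid coordinates, and maxDist is a threshold to taste):

float maxDist = 150;
for (int i = 0; i < pts.length; i++) {
  for (int j = i + 1; j < pts.length; j++) {
    //connect each pair of centres that sit within maxDist of each other
    if (dist(pts[i][0], pts[i][1], pts[j][0], pts[j][1]) < maxDist) {
      line(pts[i][0], pts[i][1], pts[j][0], pts[j][1]);
    }
  }
}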

The music is a tune I made about five years ago and forgot about.

The continuing plan for this is to get some better footage (both in terms of resolution and position) before settling on what the best output is likely to be- I quite fancy making some really big prints with the abstract stuff on them. More to come soon- this project has to be finished within about ten days. If you have any questions or comments let me know!


Shiny: Additive blending with OpenGL in Processing

This sketch was inspired by a combination of things: the particle systems chapter draft from Dan Shiffman’s forthcoming Nature Of Code book influenced the additive blending aesthetic, while I got the idea of a three dimensional “colour space” from this talk from Mario Klingemann.

All that’s really going on here is that the RGB/HSB values of each pixel of an image are mapped to XYZ coordinates while the camera rotates around the centre point. Changing the mode from RGB to HSB creates a different shape from the same collection of pixels, while the low opacity and OpenGL blending create a nice glowing effect. It’s interesting to see the connections between shades in an image- almost always a continuous spectrum without large gaps.

This runs a bit slowly, simply because of the number of points being drawn each frame. I’d like to try it with a film and see whether the character of the movement on screen comes through…

In the spirit of sharing, here’s the code- pretty straightforward, mostly…

/*
3d Picture Particles with additive blending
 Kyle Macquarrie: velvetkevorkian.wordpress.com
 */

import processing.opengl.*;
import javax.media.opengl.*; //extra import needed for additive blending
import peasy.*;

PGraphicsOpenGL pgl;
GL gl;

PImage img;
PeasyCam cam;
float[][] results;
boolean rgb=true;
boolean updateBackground= true;
boolean record= false;

void setup() {
  size(1280, 720, OPENGL);
  background(0);
  cam= new PeasyCam(this, 250);
  img= loadImage("vg.jpg");
  results= new float[img.pixels.length][3];
  analyse();
}

void draw() {
  //set up the OpenGL blending
  PGraphicsOpenGL pgl = (PGraphicsOpenGL) g;  // g may change
  GL gl = pgl.beginGL();  // always use the GL object returned by beginGL
  gl.glEnable(GL.GL_BLEND);
  gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE); //additive blending
  pgl.endGL();

  if (rgb) {
    colorMode(RGB, 255);
  }
  else {
    colorMode(HSB, 255);
  }
  if (updateBackground) {
    background(0);
  }
  cam.rotateY(radians(1));
  pushMatrix();
  translate(-128, -128, -128); //centre the 0-255 colour cube on the origin


  /*
  //draw a bounding box
   pushMatrix();
   translate(128, 128, 128);
   stroke(200, 100);
   noFill();
   box(256);
   popMatrix();
   */

  for (int i=0; i<img.pixels.length; i++) {
    if (updateBackground) { //higher alpha if canvas being cleared
      stroke(results[i][0], results[i][1], results[i][2], 175);
    }
    else { //low alpha for a nice fuzzy blend
      stroke(results[i][0], results[i][1], results[i][2], 15);
    }
    point(results[i][0], results[i][1], results[i][2]); //a point's position doubles as its colour
  }
  popMatrix();
  if (record) {
    saveFrame(frameCount+".png");
  }
}

void analyse() {
  img.loadPixels();
  for (int i=0; i<img.pixels.length; i++) {
    float a, b, c;
    if (rgb) {
      a= red(img.pixels[i]);
      b= green(img.pixels[i]);
      c= blue(img.pixels[i]);
    }
    else {
      a= hue(img.pixels[i]);
      b= saturation(img.pixels[i]);
      c= brightness(img.pixels[i]);
    }
    results[i][0]= a;
    results[i][1]= b;
    results[i][2]= c;
  }
}

void keyPressed() {
  if (key=='c') { //toggle between RGB and HSB analysis
    rgb=!rgb;
    analyse();
  }
  else if (key=='b') { //toggle background clearing
    updateBackground= !updateBackground;
    background(0);
  }
  else if (key=='r') {
    record=!record; //record a sequence of frames
  }
  else if (key=='s') {
    saveFrame(frameCount+".png"); //capture a single frame
  }
}

That’s all for now, but I have some more adventures in additive blending particle systems to show soon.
Have fun!


Research: Portfolio sites

As part of the Freelancing/Professional Skills units at college we have to design a digital portfolio to promote ourselves. Here are some examples of sites that I like in this vein…

First up, brendandawes.com, the online home of Brendan Dawes. Brendan has been known as an early adopter who pushed a lot of boundaries using Flash, and has since expanded into general interaction design. When I first came across his work, it was on the previous incarnation of his website- a Flash-heavy design that was more interesting to play with than useful for finding anything specific. This newer version is almost the opposite.

Built with Indexhibit, the layout is super simple but totally intuitive. It is very easy to locate a specific project, but also satisfying to browse around. There is no compulsion to over-categorise, and the personal touches (the coloured headings and the background design on the homepage, for example) are subtle enough to perfectly offset the simplicity of the general layout.

Next up is Michael Hansmeyer’s site, showcasing his work in generative architecture. There are a couple of nice touches here; the switch to change the colour scheme is neat, and the layout is crisp and modern. The categorisation of projects is OK, although the use of a central panel for content is less than ideal for text, in my opinion. It simply does not make best use of the available space. The images, however, are stunningly presented and more than make up for any other issues.

The last site I’m going to mention for now is the website of the (I think) Russian photographer Elena Savina. This was designed by Three Hundred Eighty Ten, whom I wrote about previously, and it shares the same… unconventional… approach to web design. It is easier to use than to explain, but essentially scrolling one of the four panels scrolls the other three in different directions. A nice touch is the lightbox-style photo galleries, which also use this sliding idea. Quite bonkers, but unforgettable.


Sunflow and Processing: the basics

Sunflow is an open source ray tracing renderer which can produce some astonishing results in the right hands. Someone far cleverer than me wrote a Java wrapper for it (the catchily titled SunflowAPIAPI), and another did a tutorial about getting it talking nicely to Processing, which I relied on heavily in getting this working. There is also a Processing library by the same author (the even catchier P5SunflowAPIAPI) but thus far I’ve not been able to get it to do what I want.

Amnon’s post goes into a bit of detail about getting SunflowAPIAPI to read complex geometry from Processing using ToxicLibs- this was my first time using ToxicLibs, but it was relatively straightforward. I wrote a simple class to generate some semi-random geometry using ToxicLibs’ TriangleMesh, plus a couple of lines of code that prepare it to be passed to Sunflow. In the main sketch I put all the Sunflow calls (setting up the lights, shaders, camera, etc.) in one function which can be triggered by a keypress. This means the sketch is mostly the same as it would be without Sunflow, and can use the OpenGL renderer to view the scene before raytracing- the sketch and the rendering are almost totally separated. I’m not sure if that is possible with the P5SunflowAPIAPI library, or with more complex geometry.
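To give a flavour of the ToxicLibs side, here’s roughly how a ring of semi-random triangles can be built with TriangleMesh; the radius, segment count and jitter ranges here are made-up values, and the actual hand-off to Sunflow follows Amnon’s tutorial, so I won’t repeat it:

import toxi.geom.*;
import toxi.geom.mesh.*;

//build one ring of triangles with a bit of vertical jitter
TriangleMesh buildRing(float radius, int segments) {
  TriangleMesh mesh = new TriangleMesh("ring");
  for (int i = 0; i < segments; i++) {
    float a1 = TWO_PI * i / segments;
    float a2 = TWO_PI * (i + 1) / segments;
    Vec3D p1 = new Vec3D(radius * cos(a1), random(-5, 5), radius * sin(a1));
    Vec3D p2 = new Vec3D(radius * cos(a2), random(-5, 5), radius * sin(a2));
    Vec3D top = new Vec3D(radius * cos(a1), random(10, 20), radius * sin(a1));
    mesh.addFace(p1, p2, top); //TriangleMesh is built one triangular face at a time
  }
  return mesh;
}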

So, to my results…
These images use a white point light in the centre of the scene and a square mesh light way up high. Rendering time was approximately half an hour for each image- at full size they’re 2100 x 2100 pixels.
[image: SunflowTestRender2]
This one uses a diffuse shader with a constant grey colour.

[image: SunflowTestRender3]
This is either a diffuse or shiny diffuse shader, with the red value increasing from 0 to 255, ring by ring.

[image: SunflowTestRender4]
Finally, this one uses the glass shader, again with the red value ranging from 0 to 255.

The only real issue I’ve found so far, which Amnon alluded to in his post, is that the camera behaves differently in Sunflow and Processing. I’ve got the settings pretty close, but everything I render in Sunflow comes out flipped 180 degrees horizontally compared to its position in the Processing window. I have no idea why at this point, and any ideas as to how to correct this would be appreciated!

Overall, this is a great example of how open source tools can really work. The freedom for people to build on each others’ work and the willingness to share experience and expertise is really inspiring.

More to come on this soon, I expect.

edit: I’ve uploaded a zipped version of this sketch, updated slightly for Processing 1.5. See the comments for links.


New Processing Sketches: A Video

Hello, and a somewhat belated happy new year! I hope 2011 has been good to you so far. I’ve been pretty busy both with official college work and personal projects, and it’s the latter I want to show today. I put together a wee compilation of some of the sketches I’ve made recently as a “showreel” of sorts (with one eye on interviewing for university in the immediate future). Some of these aren’t really suitable for web deployment, and doing it as video lets me crank up the detail and quality. It also gives me the opportunity to make some metal to go behind it.


You can play with some of the live sketches on OpenProcessing: Mesh Typography, 3D Mesh Typography, and Kinetic Typography. I might talk about some of the sketches in a bit more detail in future, in the meantime please get in touch if you have any questions or comments.

Credit where credit’s due: as well as Processing, these pieces use the Geomerative, Traer Physics and Mesh libraries. I was inspired to make some music for it by the amazing Cloudkicker, so thanks to Jaime (@speedyjx) for flagging him up to me.

Until next time…


Processing and The Guardian API- now with actual information

2010- a year in Wikileaks

Here’s a quick snapshot of how this is developing. This searches the Guardian’s Open Platform API for mentions of everyone’s favourite whistleblowing website. The bars map the number of articles on a monthly basis, where 12 o’clock is January. You can see a small peak in April, when the Collateral Murder video was released, bigger peaks in July and October as the Afghanistan and Iraq logs were published, and a massive spike in December as “Cablegate” (oh, how I loathe the use of ‘gate’ as a suffix for anything mildly controversial!) gets going. The article headlines are arranged in date order, but on a uniform scale. This is still a work in progress, but I’m quite pleased with how it’s shaping up so far.
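In case the radial mapping is of interest, the geometry is straightforward. Here’s a minimal sketch with made-up monthly counts (the real numbers come from the API, of course), where 12 o’clock is January and bar length maps the article count:

int[] counts = {12, 8, 15, 40, 10, 9, 85, 14, 11, 60, 18, 160}; //hypothetical monthly totals

void setup() {
  size(600, 600);
  background(255);
  stroke(0);
  translate(width/2, height/2);
  for (int month = 0; month < 12; month++) {
    float angle = -HALF_PI + TWO_PI * month / 12; //start at 12 o'clock, step clockwise
    float len = map(counts[month], 0, 160, 20, 250); //scale count to bar length
    line(0, 0, len * cos(angle), len * sin(angle));
  }
}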

Get in touch with any comments, criticisms or questions!
