Processing and the Guardian: Now 73% more object oriented, 300% more colourful

Hello! Just a quick update: following on from my last post I’ve refined the code a bit, letting me run multiple searches from one sketch. Here are the same three searches from last time, compiled into one image. In this case, Tony is green, Gordon is red and Dave is blue.
dearLeaders
Code will be forthcoming once I’ve refined it a bit more. Adiós!


Processing and the Guardian API

Inspired by this article from the awesome Jer “Blprnt” Thorp, I’ve been experimenting with the Guardian’s Open Platform API, which gives access to ten years’ worth of articles in XML or JSON format. You have to sign up for an API key, but it’s free and easy. I thought I’d put up some of the early tests I’ve been doing with it. I’ve never worked with XML before, so it’s been something of a learning experience!

These three pieces use the same code, just different search terms. I’ve searched for the names of our most recent “dear leaders” in the news section, and, er, put the headlines in a circle. That’s about as far as I’ve got at the moment…
David Cameron:
david+cameron

Gordon Brown:
gordon+brown

Tony Blair:
tony+blair

Not a lot of information we can glean from that, I think you’ll agree, other than that there are significantly more articles mentioning Blair (unsurprising, as he was PM for much longer than the other two, so far at least). Still, I think there’s potential for some interesting stuff. It’s also worth noting that Processing’s functionality has improved since Jer wrote his article, which I think makes it easier to get started with XML without having to dig into external libraries or obscure Java coding.

Here’s the Processing code I wrote; you’ll need to get your own API key to run it.

int page=1; //page number being called
int pages; //total pages available
int total; //total entries available
PFont f;

//set search parameters
String api= "YOUR API KEY HERE";
String url="http://content.guardianapis.com/search?";
String search= "carrot"; //will find any articles containing these terms. separate words with + 
String section = "news"; //section of the website to search (news, comment, sport etc)

XMLElement response;
String[] results;
XMLElement[] responseChild;

String constructQuery() {     //construct the search terms 
  String q= url + "q=" + search + "&section=" + section+"&page=" + page 
    + "&page-size=10&order-by=newest" + "&format=xml&api-key=" + api;
  return q;
}

void setup() {
  size(1000, 1000);
  smooth();
  noLoop();
  f= loadFont("ASCII-12.vlw");
  textFont(f, 12);


  String query= constructQuery();
  response= new XMLElement(this, query);
  pages=response.getIntAttribute("pages"); //send initial query to establish how many pages of XML we'll need to go through to get all results
  total=response.getIntAttribute("total");// and how many results there are in total

  results= new String[total];       //initialise string array to hold the results(there are 10 items per page by default)

  for(int i=0; i<pages; i++) {
    page=i+1;  
    query= constructQuery();    
    response= new XMLElement(this, query);  //run the query for each page
    responseChild= response.getChildren("results/content");             //get each XML story item 

    for(int j=0; j<responseChild.length; j++) {
      results[i*10+j]= responseChild[j].getStringAttribute("web-title"); //extract the relevant attribute as a string
    }
  }
}



void draw() {
  background(255);
  fill(0);
  translate(width/2, height/2); //move to the centre of the screen
  for(int i=0; i<results.length; i++) {
    String headline= results[i];  //run through each result and print to the canvas. 

    float angle=(TWO_PI/total)*i; 
    float xpos= cos(angle)*75;
    float ypos= sin(angle)*75;
    pushMatrix();
    translate(xpos, ypos);
    rotate(angle);
    text(i+" "+headline, 0, 0);
    popMatrix();
  }
  saveFrame(search+".tif");
}

I hope that is of use or interest to some of you. More to come on this as I dig into it a bit more.


Three Flash Pieces

OK, now it’s time for the final instalment of the Nine Words saga that has been ongoing for a while. This time, the brief was to create three interactive pieces using Flash, triggered by words chosen from the nine. My AS3 programming is not very advanced so I’ve not been able to get as conceptual as I did with the Processing pieces, but so it goes. All three rely to varying degrees on the rather nice Hype Framework, which simplifies some aspects of AS3 to let you get going a bit more easily. Click on the pictures to play with the pieces.

1: Loop
Loop

Move the mouse to control how the balls move. I was quite pleased at how simple the “physics” (such as they are) were to implement. Think of this as an analogy for life if you like…

2: Diaphanous
diaphanous

Again, move the mouse and see how the shapes react. This is an extension of one of the Hype Framework tutorials, but the springiness in the way the balls react to mouse movement is very engaging to me.

3: Ambiguity
ambiguity
This is ambiguous on a number of levels: what do the words say? What do they reveal? How do the different parts react to the movements of the mouse? I’ve applied the movement principles from the previous two pieces to the different parts here, which creates an intriguing and sometimes frustrating interface. Unlike the other two pieces, which can be entirely scripted, this one needs assets on the stage because of the masking used.

What’s that? You want some code? OK then… First up is Loop:

import hype.framework.display.BitmapCanvas;
import hype.extended.rhythm.FilterRhythm;
import hype.framework.core.TimeType;

var container:Sprite = new Sprite();
var lastX:Number=0;
var lastY:Number=0;
var speedX:Number=0;
var speedY:Number=0;

var ball1:MovieClip = new Ball();
var ball2:MovieClip = new Ball();
ball1.x=stage.stageWidth/2;
ball1.y=stage.stageHeight/2;
ball2.x=stage.stageWidth/2;
ball2.y=stage.stageHeight/2;

container.addChild(ball1);
container.addChild(ball2);

addChild(container);

stage.addEventListener(MouseEvent.MOUSE_MOVE, ballMove);
ball1.addEventListener(Event.ENTER_FRAME, updateX);
ball2.addEventListener(Event.ENTER_FRAME, updateY);


function ballMove(event: MouseEvent):void {
	speedX+=((lastX-mouseX)/3);
	speedY+=((lastY-mouseY)/3);
}

function updateX(event:Event):void {
	ball1.x+=speedX;
	speedX*=0.95;

	if (ball1.x>stage.stageWidth||ball1.x<0) {
		speedX*=-1; //bounce off the left and right edges
	}
	lastX=mouseX;
}

function updateY(event:Event):void {
	ball2.y+=speedY;
	speedY*=0.95;

	if (ball2.y>stage.stageHeight||ball2.y<0) {
		speedY*=-1; //bounce off the top and bottom edges
	}
	lastY=mouseY;
}

var canvas:BitmapCanvas=new BitmapCanvas(800,600);
canvas.startCapture(container, true);

addChild(canvas);

var blur:BlurFilter=new BlurFilter(8,8,3);
var blurRhythm:FilterRhythm=new FilterRhythm([blur], canvas.bitmap.bitmapData);
blurRhythm.start(TimeType.ENTER_FRAME, 1);

Next is Diaphanous:

import hype.extended.behavior.MouseFollowSpring;
import hype.framework.display.BitmapCanvas;
import hype.extended.rhythm.FilterRhythm;
import hype.framework.core.TimeType;

var container:Sprite = new Sprite();

var circle:MovieClip = new Circle();
var circle2:MovieClip = new Circle2();
var circle3:MovieClip= new Circle3();

container.addChild(circle);
container.addChild(circle2);
container.addChild(circle3);

addChild(container);


var b:MouseFollowSpring=new MouseFollowSpring(circle,0.9,0.2);
var c:MouseFollowSpring=new MouseFollowSpring(circle2,0.8,0.1);
var d:MouseFollowSpring=new MouseFollowSpring(circle3,0.9,0.05);


b.start();
c.start();
d.start();

var canvas:BitmapCanvas=new BitmapCanvas(800, 600);
canvas.startCapture(container, true);

addChild(canvas);

var blur:BlurFilter=new BlurFilter(10,10,3);
var blurRhythm:FilterRhythm=new FilterRhythm([blur], canvas.bitmap.bitmapData);
blurRhythm.start(TimeType.ENTER_FRAME, 3);

And finally, ambiguity…

import hype.extended.behavior.MouseFollowSpring;

var container:Sprite = new Sprite();

var speedX:Number=0;
var speedY:Number=0;
var lastX:Number=0;
var lastY:Number=0;

stage.addEventListener(MouseEvent.MOUSE_MOVE, ballMove);
backg.addEventListener(Event.ENTER_FRAME, updateX);
backg.addEventListener(Event.ENTER_FRAME, updateY);

var c:MouseFollowSpring=new MouseFollowSpring(ambig.innerOne, 0.8, 0.05);
var d:MouseFollowSpring=new MouseFollowSpring(ambig.innerTwo, 0.85 ,0.15);
var e:MouseFollowSpring=new MouseFollowSpring(ambig.innerThree, 0.9 ,0.25);

c.start();
d.start();
e.start();

function ballMove(event: MouseEvent):void {
	speedX+=((lastX-mouseX)/4);
	speedY+=((lastY-mouseY)/4);
}

function updateX(event:Event):void {
	backg.x+=speedX;
	speedX*=0.95;

	if (backg.x>=stage.stageWidth+ (backg.width/2)) {
		backg.x=-(backg.width/2); //wrap around when the background drifts off the right edge
	} else if (backg.x<=-backg.width/2) {
		backg.x=stage.stageWidth+(backg.width/2); //and off the left edge
	}
	lastX=mouseX;
}

function updateY(event:Event):void {
	backg.y+=speedY;
	speedY*=0.95;

	if (backg.y>=stage.stageHeight+ (backg.height/2)) {
		backg.y=-(backg.height/2); //wrap vertically too
	} else if (backg.y<=-backg.height/2) {
		backg.y=stage.stageHeight+(backg.height/2);
	}
	lastY=mouseY;
}

Man, that’s some ugly coding. I plan on getting my head into a bit more AS3 so maybe I’ll revisit these and rework them a bit.

Maybe…


Conceptual sound- another early sketch


I wanted to do something with proper found sound, so I got all the whirring bits of tech I could find and pointed a mic at them. As far as I recall, there are three cameras, a printer and my laptop’s DVD drive whirring away. I also wanted to try something with some dynamics to it, which was moderately successful. I quite like the driving rhythm, anyway.


Nine words, nine Processing sketches

Following on from my nine images and one video, the next part of the brief was to use Processing to create a response to the same nine words. I’ve included the code for each, as per the brief, although WordPress unfortunately mangles Processing’s nice auto formatting. It also makes this post very long, but you can skip through all the code sections if you’re so inclined. Without further ado…

1: Serendipity
serendipity
I mentioned this in a previous post, but here it is again for good measure. It’s a visualisation of the numbers drawn over fifty draws of the National Lottery. Here’s the Processing code for it, although it won’t work without the data file.

String[] lines;
String[] numbers1;
int[] numbers;
float xpos;
float ypos;

void setup() {
size(1000, 1000);
lines = loadStrings("numbers.txt");
background(255);
smooth();
}

void draw() {
background(255);

for(int j=0; j<lines.length; j++) { //run through every line of data
numbers1= splitTokens(lines[j]); //separate each line into its component strings
numbers= int(numbers1); //convert string[] to int[]
float distance= map(j, 0, lines.length, 20, width/2);
beginShape();
for (int i=0; i<numbers.length; i++) { //run through each individual data point
if(j%3 == 0) {
fill(255, 0, 0, 40);
}
else if(j%3 ==1) {
fill(0, 255, 0, 40);
}
else if(j%3 ==2) {
fill(0, 0, 255, 40);
}
float angle= map(numbers[i], 1, 49, 0, TWO_PI);
float x= (width/2)+sin(angle)*distance;
float y= (height/2)+cos(angle)*distance;

xpos= map(numbers[i], 1, 49, 50, width-50);
noStroke();
ellipse(x, y, 20, 20);
noFill();
if(j%3 == 0) {
stroke(255, 0, 0, 70);
}
else if(j%3 ==1) {
stroke(0, 255, 0, 70);
}
else if(j%3 ==2) {
stroke(0, 0, 255, 70);
}

vertex(x, y);
}
endShape();
}
}

void keyPressed() {
if(key=='s') {
saveFrame("lotteryViz4.tif");
}
}

2: Sequential
This sketch splits a given word or phrase into its letters and rearranges them in 3D space, joined by Bezier curves. It uses the PeasyCam library for camera control.
sequential

import peasy.*;

PeasyCam cam;

PFont f;

float[] ptA;
float[] ptB;
float[] ptC;

float noiseSourceA= 0;
float noiseSourceB= 0;
float noiseSourceC= 0;

float mod= 0.5;
int iterations=1;

String message= "Sequential?";
int maxIterations= message.length();

void setup() {
size(1000, 1000, P3D);
smooth();
cam = new PeasyCam(this, 600);
f= loadFont("Calibri-72.vlw");
textFont(f, 25);

ptA= new float[0];
ptB= new float[0];
ptC= new float[0];
}

void draw() {
background(255);

if(iterations<=maxIterations) {
float a= calc(noiseSourceA);
float b= calc(noiseSourceB);
float c= calc(noiseSourceC);

ptA= append(ptA, a);
ptB= append(ptB, b);
ptC= append(ptC, c);

noiseSourceA+= random(mod);
noiseSourceB+= random(mod);
noiseSourceC+= random(mod);
}
fill(210, 17, 12, 180);
for (int i=0; i<ptA.length; i++) {
text(message.charAt(i%message.length()), ptA[i], ptB[i], ptC[i]);
println(ptA.length);
}

noFill();
stroke(0, 50);
beginShape();

for(int i=0; i<ptA.length; i++) {
for (int j=0; j<ptA.length; j++) {
vertex(ptA[i], ptB[i], ptC[i]);
vertex(ptA[j], ptB[j], ptC[j]);
bezierVertex(ptB[i], ptC[i], ptA[i], ptC[i], ptA[i], ptB[i], ptA[i], ptB[i], ptC[i]);
bezierVertex(ptB[j], ptC[j], ptA[j], ptC[j], ptA[j], ptB[j], ptA[j], ptB[j], ptC[j]);
}
}
endShape();

if (iterations<= maxIterations) {
iterations++;
}
}

float calc(float source) {
float noiseResult= noise(source);
float noiseMap= map(noiseResult, 0, 1, -width/2, width/2);
return noiseMap;
}

void keyPressed() {
if(key==' ') {
iterations=1;
ptA= new float[0];
ptB= new float[0];
ptC= new float[0];
}

if(key=='s') {
saveFrame();
}
}

3: Loop
This uses several looping sections of code to create a repeating pattern controlled by trigonometry. Changing the values of x and y in the code produces radically different designs. I spent a morning at the Glasgow School of Art this week, then came home and developed this, which, completely coincidentally, bears a certain resemblance to Charles Rennie Mackintosh’s geometric roses. Art imitating life?
loop

float x;
float y;

void setup() {
size(1000, 1000);
background(255);
smooth();
vars();
}

void vars() {
x= width/2;
y=width/2;
}

void draw() {
println(x);
noLoop();
//background(255);
//noFill();
stroke(25, 50);
for(int a=0; a<700; a++) {
for(int i=0; i<10; i++) {
fill(i*25, 0, 50, 25);

float xx=map(sin(x), -1, 1, 0, width);
float yy= map(cos(y), -1, 1, 0, height);

ellipse(xx, yy, (tan(i)*x%60), (tan(i)*y%50));
}
x+=9;
y+=7;
}
// saveFrame();
}

4: Crash
I’ve been trying to use fewer random or pseudo-random (e.g. Perlin noise) functions in my coding, but this seemed like the right place for some randomness. This takes an image (a screenshot from one of my earlier posts), chops random pieces out and rearranges them over the screen, collage style. I like how there are still recognisable chunks of the interface present despite the mangling.
crash

PImage img;

void setup() {
size(1000, 1000);
img= loadImage("VerdantScapeScreen.jpg");
noLoop();
}
int x=0;
int y=0;

void draw() {
background(0);
for(int a=0; a<500; a++) { //note: the loop count, chunk sizes and copy() call here are a reconstruction- the original loop body was mangled when posted
int w= int(random(20, 150));
int h= int(random(20, 150));
copy(img, int(random(img.width-w)), int(random(img.height-h)), w, h, x, y, w, h); //grab a random chunk of the screenshot and paste it at the current position
x+=w;
if(x>width) {
x=0;
y+=int(random(50));
}
}
//saveFrame("###");
}

5: Ambiguity
This sketch mashes up two images from my lottery visualisations post by taking alternate pixels from each image and combining them. This process can bring some pretty mad results depending on what pictures you use as sources. I rather like this one as it combines two pictures with definite meanings into one with no discernible meaning!
ambiguity
int x=0;
int y=0;
int a=0;
int b=0;

void setup() {
size(1000, 1000, P2D);
background(255);
}

void draw() {
PImage img1= loadImage("lotteryViz2.jpg");
PImage img2= loadImage("lotteryviz4.jpg");

for(int i=0; i<img1.width*img1.height; i++) { //note: the body of this loop is a reconstruction- the exact pixel-picking was mangled when posted, so this is a guess at the alternating approach described above
if(i%2==0) {
set(x, y, img1.get(x, y)); //even pixels taken from the first image
}
else {
set(a, b, img2.get(a, b)); //odd pixels taken from the second image
}
x++;
a++;

if(x>img1.width) {
x=0;
y++;
}

if(a>width) {
a=0;
b++;
}
}
}

void keyPressed() {
if (key=='s') {
saveFrame("###.tif");
println("saved");
}
}

6: Condition
This uses the very cool Geomerative library, which lets you break down shapes (including fonts and vector graphics) into points. This is actually a still from an animated version, which I think sums up the sketchy pencil feel you can achieve. If you run the code, you can switch the drawing style using the space bar.
condition

import geomerative.*;

RFont f;
RShape grp;
RShape grp2;
RPoint[] points;
RPoint[] points2;

boolean up= false;
int polyLength= 0;
boolean clear= false;

void setup() {
size(1000, 600, P2D);
frameRate(20);
background(255);
RG.init(this);

grp = RG.getText("Condition:", "GenBasB.ttf", 150, CENTER);
grp2= RG.getText("sketchy", "GenBasB.ttf", 150, CENTER);

smooth();
}

void draw() {

if(clear==true) {
background(255);
stroke(50, 150);
}
else {
stroke(50, 10);
}

translate(width/2, height/2);

RG.setPolygonizerLength(polyLength);

points= grp.getPoints();
points2= grp2.getPoints();

for(int i=4; i<points.length-1; i++) {
for(int j=0; j<5; j++) {
line( points[i-j].x, points[i-j].y, points[i].x, points[i].y);
}
}

translate(0, 200);

for(int i=4; i<points2.length-1; i++) {
for(int j=0; j<5; j++) {
line( points2[i-j].x, points2[i-j].y, points2[i].x, points2[i].y);
}
}

if(up==false) {
polyLength--;
}
else {
polyLength++;
}

if(polyLength<0 || polyLength>255) { //note: these bounds are a guess; the original values were mangled when posted
up=!up;
background(255);
}
}

void keyPressed() {
if(key==' ') {
clear=!clear;
}
if(key=='s') {
saveFrame("###");
}
}

7: Diaphanous
This one is pretty similar to some of my previous Perlin noise experiments, but it uses Bezier curves, which give it a lovely organic feel. It uses the OpenGL renderer as it’s painfully slow otherwise.
diaphanous

import processing.opengl.*;

ArrayList beziers;

color red1= color(237, 12, 12);
color blue1= color(71, 2, 201);

float con1xSource=random(100);
float con1ySource=random(100);
float con2xSource=random(100);
float con2ySource=random(100);

float point1xSource=random(100);
float point1ySource=random(100);
float point2xSource=random(100);
float point2ySource=random(100);

float transStroke= 30;
float transFill= 4;

float mod= 0.007;

void setup() {
size(1000, 1000, OPENGL);
smooth();
background(255);
beziers= new ArrayList();
}

void draw() {

background(255);

for(int i= 1; i<beziers.size(); i++) {
FadingBezier theBeziers= (FadingBezier) beziers.get(i);
theBeziers.drawCurve();
theBeziers.update();
if(theBeziers.strokeTrans<0) {
beziers.remove(i);
}
}

println(beziers.size());

float con1x= calc(con1xSource);
float con1y= calc(con1ySource);
float con2x= calc(con2xSource);
float con2y= calc(con2ySource);

float point1x= calc(point1xSource);
float point1y= calc(point1ySource);
float point2x= calc(point2xSource);
float point2y= calc(point2ySource);

beziers.add(new FadingBezier(0, point2y, con1x, con1y, con2x, con2y, point1x, point1y, transStroke, transFill, red1, blue1));
beziers.add(new FadingBezier(width, point2x, con1x, con1y, con2x, con2y, point1x, point1y, transStroke, transFill, red1, blue1));
beziers.add(new FadingBezier(point2x, 0, con1x, con1y, con2x, con2y, point1x, point1y, transStroke, transFill, blue1, red1));
beziers.add(new FadingBezier(point2y, height, con1x, con1y, con2x, con2y, point1x, point1y, transStroke, transFill, blue1, red1));

con1xSource+=random(mod);
con1ySource+=random(mod);
con2xSource+=random(mod);
con2ySource+=random(mod);

point1xSource+=random(mod);
point1ySource+=random(mod);
point2xSource+=random(mod);
point2ySource+=random(mod);
}

float calc(float source) {
float noiseResult= noise(source);
float noiseMap= map(noiseResult, 0, 1, 0, width);
return noiseMap;
}

void mousePressed() {
beziers= new ArrayList(0);
}

void keyPressed(){
if(key=='s'){
saveFrame("beziers###.tif");
}
}

class FadingBezier {
float x1, y1, c1x, c1y, c2x, c2y, x2, y2, strokeTrans, fillTrans;
color strokeCol, fillCol;

FadingBezier(float X1, float Y1, float C1X, float C1Y, float C2X, float C2Y, float X2, float Y2,
float STROKETRANS, float FILLTRANS, color STROKECOL, color FILLCOL) {
x1= X1;
y1= Y1;
c1x= C1X;
c1y= C1Y;
c2x= C2X;
c2y= C2Y;
x2= X2;
y2= Y2;
strokeTrans= STROKETRANS;
fillTrans= FILLTRANS;
strokeCol= STROKECOL;
fillCol= FILLCOL;
}

void drawCurve() {
stroke (strokeCol, strokeTrans);
fill(fillCol, fillTrans);
bezier(x1, y1, c1x, c1y, c2x, c2y, x2, y2);
}

void update() {
strokeTrans-= 0.1;
fillTrans-= 0.09;
}
}

8: Utopia
OK, time to push the boat out slightly. Number eight is a rendering of an IDIC, which is a symbol of the logical foundations of Vulcan philosophy from Star Trek. It also inspired the sub-title for this blog: Infinite Diversity in Infinite Combination. For me this is the ideal way of approaching life and art, especially when it comes to coding and generative art, although I should stress that I liked logic before I liked Star Trek! Geek out. I looked at this as an exercise in making sure the design scales to fit the window: as long as it’s square, it should display properly regardless of size.
utopia

void setup() {
size(1000, 1000);
smooth();
background(255);
}

void draw() {
translate(-width/6, -width/6);
background(255);
noFill();
for(int i=0; i<50; i++) {
stroke(0, 255-(i*5));

ellipse(width*0.6, width*0.6, (width*0.55)-i, (width*0.55)-i);
ellipse(width/2, width/2, (width/5)-i, (width/5)-i);
ellipse(width/2, width/2, (width/30)-i/2, (width/30)-i/2);

triangle(width/2+i, width/2+i, width-i, width*0.7, width*0.7, width-i);
}
}

void keyPressed() {
if(key=='s') {
saveFrame("###");
}
}

9: Ephemeral
Finally for now, another bit of sci-fi homage. Inspired by this fantastic scene from the end of Blade Runner, I wanted to try and capture the spirit of Rutger Hauer’s monologue. The scene is largely defined by his phrasing, which is difficult to convey in text, so I focused on other elements: the dove escaping as he dies, the idea of burning out rather than fading away. I had a much busier version which I actually uploaded (it’s on my Flickr if you want to look for it), but I think this one works better. The code is pretty lengthy/clumsy as I’ve not yet found a way to process the three arrays of points together. Again, if you want to run the code you’ll need to get the Geomerative library.
ephemeral2

import geomerative.*;

RShape grp0;
RShape grp1;
RShape grp2;
RPoint[] points0;
RPoint[] points1;
RPoint[] points2;

String[] cbeams;
PFont f;

void setup() {
size(1000, 1000);
cbeams= loadStrings("cBeams.txt");
RG.init(this);

f= loadFont("Calibri-48.vlw");
grp0= RG.getText(cbeams[0], "calibri.ttf", 15, CENTER);
grp1= RG.getText(cbeams[1], "calibri.ttf", 15, CENTER);
grp2= RG.getText(cbeams[2], "calibri.ttf", 15, CENTER);

RG.setPolygonizerLength(2);
points0= grp0.getPoints();
RG.setPolygonizerLength(3);
points1= grp1.getPoints();
RG.setPolygonizerLength(1);
points2= grp2.getPoints();

textFont(f, 20);
textAlign(CENTER);
background(0);
smooth();
println(points2.length);
strokeWeight(0.7);
}

void draw() {
noLoop();
background(0);

noStroke();
fill(255, 150);
pushMatrix();
translate(width*0.5, height*0.3);

grp0.draw();
for(int i=0; i<points0.length; i++) {
stroke(255, 10);
float offset=map(i, 0, points0.length, -250, 250);

if(points0[i].x<width && points0[i].x>0) { //note: this condition is a guess; the original bounds were mangled when posted
line(points0[i].x, points0[i].y, width/2, points0[i].y+offset);
line(points0[i].x, points0[i].y, width/2, points0[i].y-offset);
}
}

translate(0, height*0.25);

for(int i=0; i<points1.length; i++) {
stroke(255, 10);
float offset=map(i, 0, points1.length, 250, -250);

if(points1[i].x<width && points1[i].x>0) {
line(points1[i].x, points1[i].y, width/2, points1[i].y+offset);
line(points1[i].x, points1[i].y, width/2, points1[i].y-offset);
}
}

noStroke();
fill(255, 150);
grp1.draw();

translate(0, height*0.25);

for(int i=0; i<points2.length; i++) {
stroke(255, 10);
float offset=map(i, 0, points2.length, 200, -200);

if(points2[i].x<width && points2[i].x>0) {
line(points2[i].x, points2[i].y, width/2, points2[i].y+offset);
line(points2[i].x, points2[i].y, width/2, points2[i].y-offset);
line(points2[i].x+60, points2[i].y, width/2, points2[i].y+offset);
line(points2[i].x+60, points2[i].y, width/2, points2[i].y-offset);
}
}

noStroke();
fill(255, 150);
grp2.draw();
popMatrix();

//saveFrame("###");
}

So, there we are. Please let me know what you think of these pieces, or if you have any questions or suggestions. Until next time, live long and prosper!


Processing sketches: lottery number visualisations

Following on from my previous posts about the nine words we’re using as inspiration, I thought I’d show some of the ideas I’ve been playing with in the third phase of the project, which is using Processing to create images. The images here demonstrate some of the things I really like about Processing, like the way that an idea can be reworked quickly and easily to create images which can be interesting visually and conceptually.

All the images in this post are linked with the single word ‘serendipity’. Inspired by the works of people like Ben Fry and Jer Thorp, who work on large scale data visualisations, I decided to plug in some results from the UK’s National Lottery and see what I could come up with. All the results were taken from this archive.

This was the initial sketch, which takes four draws (of seven numbers each) and represents each draw with a differently sized circle. The numbers range from 1 to 49, left to right:
lotteryViz1
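The code for this first version isn’t in the post, but the idea is simple enough that a minimal sketch can stand in for it. Everything below is an illustration rather than the original: the four draws are placeholder numbers, not real results, and the circle sizes are arbitrary.

//a minimal sketch of the first version, assuming four draws of seven numbers each;
//the numbers below are placeholders for illustration, not real lottery results
int[][] draws = {
  {3, 11, 19, 25, 32, 40, 47},
  {1, 8, 16, 27, 33, 41, 44},
  {5, 14, 22, 29, 36, 42, 49},
  {2, 9, 17, 24, 31, 38, 45}
};

void setup() {
  size(1000, 200);
  smooth();
  background(255);
  noStroke();
  fill(0, 60);

  for (int j = 0; j < draws.length; j++) {
    float diameter = 10 + j * 15; //each draw gets its own circle size
    for (int i = 0; i < draws[j].length; i++) {
      float x = map(draws[j][i], 1, 49, 50, width - 50); //numbers 1 to 49 run left to right
      ellipse(x, height / 2, diameter, diameter);
    }
  }
}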

Here’s the second version, which is the same idea stretched over the vertical axis and using 50 draws, instead of just four:
lotteryViz2
Next, using the same 50 draws, I spread the results out a bit. Here, the numbers 1 to 49 are mapped on a circle; each draw is placed on a separate circle radiating outwards from the centre. Each dot is a number and is connected to the other dots from that draw. I added some basic colour to make it easier to distinguish the different draws. This is the version I’ll probably use as part of the final submission for this project.
lotteryviz4
I decided to focus on the shapes formed by the links between the numbers in each draw. Here they are individually:
lotteryviz8
Here they’re laid on top of each other:
lotteryviz7
And this is the same concept but rendered in 3d. The Z axis (depth) relates to time:
lotteryviz6a
And a second image of that, to give more of a sense of depth:
lotteryviz6b
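The 3D version’s code isn’t in the post either; the core of it is just the same polar mapping with each draw pushed back along the z axis. Here’s a rough sketch of that idea, assuming a numbers.txt file with one draw per line, as in the Serendipity sketch above; the mouse rotation is only a quick stand-in for proper camera control.

String[] lines;

void setup() {
  size(1000, 1000, P3D);
  lines = loadStrings("numbers.txt"); //one draw per line, numbers separated by spaces
  noFill();
}

void draw() {
  background(255);
  stroke(0, 60);
  translate(width/2, height/2, 0);
  rotateY(map(mouseX, 0, width, -PI/3, PI/3)); //rough stand-in for camera control

  for (int j = 0; j < lines.length; j++) {
    int[] nums = int(splitTokens(lines[j]));
    float z = map(j, 0, lines.length, 0, -800); //depth stands in for time: older draws sit further back

    beginShape();
    for (int i = 0; i < nums.length; i++) {
      float angle = map(nums[i], 1, 49, 0, TWO_PI);
      vertex(sin(angle) * 200, cos(angle) * 200, z);
    }
    endShape(CLOSE);
  }
}
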
Finally, I’m aware that none of these images are particularly useful in actually analysing the numbers drawn, and deliberately so: I’ve approached this from a visual perspective, rather than a data analysis perspective. Just to show it can be done, though, here’s a straightforward graph of how often each number was drawn in the fifty draws concerned here:
lotteryGraph
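The code for the graph isn’t included here, but counting the frequencies is straightforward. A minimal sketch, assuming the same one-draw-per-line numbers.txt file as before (the bar scaling is arbitrary):

String[] lines;
int[] counts = new int[50]; //indices 1 to 49 are used

void setup() {
  size(1000, 400);
  background(255);
  lines = loadStrings("numbers.txt");

  for (int j = 0; j < lines.length; j++) {
    int[] nums = int(splitTokens(lines[j]));
    for (int i = 0; i < nums.length; i++) {
      counts[nums[i]]++; //tally how often each number has come up
    }
  }

  fill(0);
  noStroke();
  for (int n = 1; n <= 49; n++) {
    float x = map(n, 1, 49, 50, width - 50);
    float h = counts[n] * 15; //bar height scales with frequency
    rect(x - 5, height - 50 - h, 10, h);
  }
}
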
I hope that’s been interesting. I’ll put the code for the final piece up in a later post, once I’ve finalised the other eight images! Enjoy!


9 Words- One Video

Following on from my recent Nine Images post, here’s a video equivalent. I spent a lot of time thinking up obscure ways to link the nine words (ambiguity, ephemeral, loop, serendipity, utopia, crash, condition, diaphanous, and sequential) with short videos. In the end I decided to be pretty literal with my interpretation of the words and instead to make the presentation of the videos more interesting by combining them into one visual assault.

The videos were filmed on a Mini-DV camera in standard PAL definition, then captured in Premiere, where they were edited, resized and in some cases speed-changed before being exported as an .mp4 using the H.264 codec, which gives the best balance between size and quality in my experience.

It is meant to be silent.

Enjoy!


Conceptual Sound: Early Sketches I

Here are some quick sound sketches I put together for our Conceptual Sound unit, as early experiments. All were created using Reaper, a fantastic DAW [digital audio workstation] program available for a ridiculously low price and an insanely generous unrestricted trial. I’ve not used that many DAWs recently, but I find it much more intuitive to use than Cubase. These pieces involve a lot of routing audio and MIDI to different tracks, which Reaper makes incredibly easy.

1: VerdantScape
VerdantScape by velvetkevorkian
This is based around a snippet of a sermon I found online- the full version is over an hour long. The main voice track uses a MIDI trigger to send a MIDI note to the synths, one of which has a built-in sequencer for the rhythmic pulses. The opening synth pulses are triggered in the same way as the rest, even when the vocals are faded out. Here’s a screen grab of the Reaper window showing the track layout, the routing matrix and the effects windows for each track:
VerdantScapeScreen

2: Senseless Violence
SenselessViolenceScape2 by velvetkevorkian
This one is based around a repeated loop of a man speaking with additional sounds dropped on top. There’s a sweeping filter on the loop, and there are also a couple of instances of Reaper’s bundled Avocado glitch generator plugin, which is frankly awesome, and some ping-pong delay to make it a bit more disorientating. Again, some synths are triggered by a MIDI trigger on the glitch tracks. Here’s a screen grab:
SenselessViolenceScreen

3: Crash
CrashScape by velvetkevorkian
Part of the brief for this unit is to use the nine words mentioned in a previous post as inspiration for sound pieces. I took this a bit literally and based this around a loop of a crash cymbal which has been pitch shifted down and slowed down. Again, this triggers some synths and feeds a glitch generator. Obligatory screenshot:
CrashScapeScreen
And there you have it. Please let me know your thoughts in the comments!


Nine words, nine images

Here are some images I created for a college assignment based on nine words: ambiguity, diaphanous, condition, crash, ephemeral, loop, sequential, serendipity and utopia. All were inspired by examples from an archived Department of Transport book called “Know Your Traffic Signs”, and use a version of the Transport font from CBRD.co.uk. No, I didn’t know there was a website dedicated to cataloguing roads either.

I hit upon the idea of road signs while I was looking for a consistent thread to tie these ideas together; the meanings of all nine words are open to so many interpretations that I had to find some way to link each one without being completely literal about it. The brief suggested that the words were related to the artistic process; I guess you could see that as a journey of sorts, and these are markers to guide your path. Maybe it just appealed to my sense of humour. In any case, it is a strong and recognisable visual identity; its ubiquity (to people in the UK at least) makes it ideal for a spot of pastiche.

From a technical standpoint, all the signs were copied from the book into Photoshop where they were “adapted”, then saved as GIFs: since there are only a couple of colours in each, GIF keeps the file size down and avoids the visual artefacts you’d get with JPEG. I considered vectorising them using Illustrator, but there didn’t seem much point unless I need them printed at a large scale.

ambigous1
Ambiguity: Instructions which make no sense. Is one correct? Are neither? Are both?

diaphanous1
Diaphanous: You can see through it, can’t you?

condition1
Condition: We are conditioned to a lot of strange things in this day and age. Motorways are just one of those things.

crash1
Crash: Could be handy to have a separate lane for accidents?

ephemeral1
Ephemeral: Here today, gone tomorrow…

loop1
Loop: I’m not going to insult your intelligence by explaining this one.

sequential1
Sequential: A series of actions- no options here.

serendipity1
Serendipity: A long and confusing route to an unlooked-for happy ending.

utopia1
Utopia: Because obviously, any utopia will have boats. And trains.

Hopefully those will guide you on the roads. Safe journey!


Net Art Research, pt III: mrdoob.com

This last site kind of bridges the gap between the two I looked at previously: introducing Mr Doob. His speciality seems to be HTML5 madness and high-end JavaScript, but there’s some phenomenal stuff here, some of which is really pushing the boundaries of what’s possible visually on the web while still maintaining something that people will actually want to use.

Harmony is a procedural drawing tool which provides some interesting drawing possibilities; Google Sphere and Google Gravity reconfigure the world’s most famous search engine (make sure you actually try a search in both!). Or So They Say is just a lovely ambient experience, but possibly the best known (although it’s not clear exactly what his role in it was) is The Wilderness Downtown, a music video for the band Arcade Fire. Built in HTML5, it combines specially filmed footage with content from Google Earth to make something a bit mental. I suspect we’ll be seeing a lot more of this sort of thing as HTML5 support gets better.

You may find a lot of these projects aren’t compatible with all browsers, especially the Google experiments. I recommend SRWare Iron, a version of Google’s Chrome browser with all the stuff that secretly talks back to Google stripped out. Isn’t open source great?
