My Kinect for Motion Design Explorations (Part 1)

For several weeks now I have been exploring the possibilities of the Kinect, and I am quite excited about the broad range of new possibilities it opens up for the future!
In this post I want to share my first experiences; I hope they will be helpful to somebody!

First of all, I want to say a big thank you to all of the OpenNI / OpenKinect / PrimeSense developers who have made it relatively easy for a non-developer like me to get started with the Kinect - thank you! Thanks also to my friends Roman and Pelayo of welovecode for pointing me in the right direction to get things going.

I am an After Effects focused motion designer and I enjoy working with Trapcode Particular, Plexus and Form. That means I think and work mainly in 3D, with cameras and artefacts moving around in space. On the other hand I also like cinematography and recording moving images to combine them with motion graphics. What excites me most about the Kinect is the ability to record 3D data and to process this data further in After Effects.

Since I have learned some Processing before and I also write a lot of expressions in After Effects, it was not too difficult to understand how OpenNI and OpenKinect work. Daniel Shiffman has a great introductory article on getting started with the Kinect.

I wanted to record the depth data that comes out of the Kinect, so I hacked together this little script that works with the simple-openni wrapper:




import SimpleOpenNI.*;

SimpleOpenNI kinect;
boolean record = false;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  kinect.update();
  // draw the current depth image
  image(kinect.depthImage(), 0, 0);
  if (record) {
    // save the depth image as a numbered frame
    saveFrame("frames/depthmap-####.jpg");
    text("Recording frame " + frameCount, 10, 15);
  }
}

void keyPressed() {
  if (key == 'r') {
    record = true;
    frameCount = 0;  // restart the frame numbering when recording starts
  }
  else if (key == 's') {
    record = false;
  }
}


With 'r' you start the recording and with 's' you stop it. The image sequence is saved into a folder called 'frames' inside the sketch folder. Very basic, but it worked for me as a first step.
I then used the resulting image sequence in After Effects as a luma map to drive Trapcode Form's z-extrusion, and to be able to use custom particles etc. That turned out to work quite well. What annoyed me was that there was a lot of glitchy data due to the suboptimal recording situation in my office.

I found an example sketch by Elie Zananiri in the OpenKinect library that sets a dynamic threshold to keep only a part of the depth information and filter out the rest.

I did not manage to output the depth image with the resulting threshold matte applied, but I figured that for a non-programmer it would be easier just to export two separate image sequences: one with the depth data, and the other with the "rough alpha" channel.




import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;
int kWidth  = 640;
int kHeight = 480;
int kAngle  = 15;

PImage depthImg;
int minDepth = 60;
int maxDepth = 860;
boolean record = false;

void setup() {
  size(kWidth, kHeight);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
  kinect.tilt(kAngle);
  depthImg = new PImage(kWidth, kHeight);
}

void draw() {
  // draw the raw depth image and save it as the depth sequence
  image(kinect.getDepthImage(), 0, 0);
  if (record) {
    saveFrame("frames/depthmap-####.jpg");
    println("Recording " + frameCount);
  }

  // threshold the raw depth: white inside the range, black outside
  int[] rawDepth = kinect.getRawDepth();
  for (int i = 0; i < kWidth * kHeight; i++) {
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      depthImg.pixels[i] = 0xFFFFFFFF;
    } else {
      depthImg.pixels[i] = 0;
    }
  }

  // draw the thresholded image and save it as the alpha sequence
  depthImg.updatePixels();
  image(depthImg, 0, 0);
  if (record) {
    saveFrame("frames/alpha-####.jpg");
    println("Recording " + frameCount);
  }

  println("TILT: " + kAngle);
  println("THRESHOLD: [" + minDepth + ", " + maxDepth + "]");
}

void keyPressed() {
  if (key == CODED) {
    // arrow keys adjust the motor tilt
    if (keyCode == UP) {
      kAngle++;
    } else if (keyCode == DOWN) {
      kAngle--;
    }
    kAngle = constrain(kAngle, 0, 30);
    kinect.tilt(kAngle);
  }
  // 'q'/'w' raise and lower the near threshold, 'z'/'x' the far threshold
  else if (key == 'q') {
    minDepth = constrain(minDepth + 10, 0, maxDepth);
  } else if (key == 'w') {
    minDepth = constrain(minDepth - 10, 0, maxDepth);
  }
  else if (key == 'z') {
    maxDepth = constrain(maxDepth + 10, minDepth, 2047);
  } else if (key == 'x') {
    maxDepth = constrain(maxDepth - 10, minDepth, 2047);
  }
  // 'r' starts recording, 's' stops it
  else if (key == 'r') {
    record = true;
    frameCount = 0;
  } else if (key == 's') {
    record = false;
  }
}

void stop() {
  kinect.quit();
  super.stop();
}



Then I played around a little with camera perspectives and particles, and this is the resulting animation:



The limitation I see here is that this approach only outputs 256 grayscale levels of depth, while the Kinect's raw depth values range from 0 to 2047 (11 bits). In part two I will show you how I managed to access the full point cloud that comes out of the Kinect and use it further in the 3D program Cinema 4D.
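In the meantime, if you need more than 256 depth levels from the exported frames, one possible workaround is to pack the 11-bit raw value into two 8-bit image channels and save the frames as lossless PNGs. This is just a minimal sketch, not something I used for the animation above, and it assumes the same OpenKinect calls (getRawDepth() etc.) as in the sketch before:


import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;
PImage packed;
boolean record = false;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
  packed = new PImage(width, height);
}

void draw() {
  // pack each 11-bit raw depth value (0-2047) into two 8-bit channels
  int[] rawDepth = kinect.getRawDepth();
  for (int i = 0; i < rawDepth.length; i++) {
    int d = rawDepth[i];
    int high = d >> 8;     // most significant bits -> red channel (0-7)
    int low  = d & 0xFF;   // least significant byte -> green channel (0-255)
    packed.pixels[i] = color(high, low, 0);
  }
  packed.updatePixels();
  image(packed, 0, 0);

  if (record) {
    // PNG is lossless, so the packed values survive the export
    saveFrame("frames/rawdepth-####.png");
  }
}

void keyPressed() {
  if (key == 'r') {
    record = true;
    frameCount = 0;
  } else if (key == 's') {
    record = false;
  }
}

void stop() {
  kinect.quit();
  super.stop();
}


The full value can then be reconstructed from the two channels as raw = red * 256 + green; the image on screen looks odd, but all 2048 depth steps are preserved in the files.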




Stuxnet: Anatomy of a Computer Virus by Patrick Clair

An infographic dissecting the nature and ramifications of Stuxnet, the first weapon made entirely out of code. It was produced for the Australian TV program HungryBeast on ABC1.

Direction and Motion Graphics: Patrick Clair
Written by: Scott Mitchell

Production Company: Zapruder’s Other Films.




Richard Stallman of the Free Software Foundation visits "NOW"

Richard Stallman at the CCCB (photo by daniel.julia, Flickr)

As the final event of NOW’s April edition – “Meetings in the present continuous” – Richard Stallman, inventor of GNU and founder of the Free Software Foundation, shared his thoughts and beliefs at the CCCB.

I really enjoyed the presentation by this godfather of free software, although it was more a religious mass than a lecture: Stallman talked about Free Software, Human Rights and the ideological term “piracy” (“pirates capture ships, not digital stuff”), always trying to convince the audience to switch to Linux. The exorcism procedures included Microsoft-bashing, Apple-condemning and RealPlayer-insulting.




paint and animate

the installation "drawn" by zachary lieberman uses both analogue and digital techniques to create an intuitive experience:

drawn takes us back to one of the most simple and ancient expressions of creativity: painting.
in this case, the analog process is connected to a real-time digital animation system. figures drawn with pen and ink seem to take on a life of their own and interact with the hands of their creators.




web 2.0 makes easy $$$

finally i found the perfect web 2.0 online application for making easy bucks: bullshitr is a step-by-step introduction on how to make money from the current web hype:

1. Devise bullshit-compliant products and services with the Web 2.0 Bullshit Generator™
2. Go to Brownpau’s Buzzphrase Generator for some excellent catchphrases with which to litter your site and marketing materials.
3. Name your new Web 2.0 site with Andrew Woolridge’s Web 2.0 Company Name Generator.
4. Go get yourself a snazzy logo with the Web 2.0 Logo Generator.
5. Lather, rinse, repeat.
6. ???
7. Sell your company to Yahoo!
8. Profit! †

having followed this advice, my shiny new company will have a logo like this…

Generated Image

i will dedicate myself to creating a fancy mashup that can be described as ‘tag-based apps via maps api’ and, based on this, i will make money spreading ‘citizen-media widgets’, thanks to my higher education and my perspective as a media science student.




frame by frame

the idea is as simple as it is ingenious: juan ospina, who lives in italy, has built a flash web application that makes it possible to create small (and also longer) animations. flipbook also includes a gallery where the best clips can be viewed and ranked with a rating system.
