Rhino - Daily Detail

Day 14 of my 2015 #dailydetail series.


Diagonal Noise - Daily Detail

Day 11 of my 2015 #dailydetail series.


Framed - Daily Detail

Day 10 of my 2015 #dailydetail series.


Dark Neon - Daily Detail

Day 8 of my 2015 #dailydetail series.


45 degrees on grey

Day 6 of my 2015 #dailydetail series.


A strange mountain with black artefacts - Daily Detail

Day 5 of my 2015 #dailydetail series.


A Moon of Octane - Daily Detail

Day 2 of my 2015 #dailydetail series.


My Kinect for Motion Design Explorations (Part 1)

For several weeks now I have been exploring the possibilities of the Kinect, and I am quite excited about the broad range of new possibilities it opens up!
In this post I want to share the first experiences I have made; I hope they help somebody!

First of all, I want to express a big thank you to all of the OpenNI / OpenKinect / PrimeSense developers who have made it relatively easy for a non-developer like me to get started with the Kinect. Thank you! Also to my friends Roman and Pelayo of welovecode for pointing me in the right direction to get things going.

I am an After Effects focused motion designer and I enjoy working with Trapcode Particular, Plexus, and Form. That means I think and work mainly in 3D, with cameras and elements moving around in space. On the other hand I also like cinematography and recording moving images to combine them with motion graphics. What excites me most about the Kinect is the ability to record 3D data and to process this data further in After Effects.

Since I had learned some Processing before and I also write a lot of expressions in After Effects, it was not too difficult to understand how OpenNI and OpenKinect work. Daniel Shiffman has a great introductory article on getting started with the Kinect.

I wanted to record the depth data that comes out of the Kinect, so I hacked together this little script that works with the SimpleOpenNI wrapper:

import SimpleOpenNI.*;

SimpleOpenNI kinect;
boolean record = false;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);
  if (record) {
    // save the current depth frame before drawing the overlay text
    saveFrame("data/frames/frame-####.png");
    text("Recording frame " + frameCount, 10, 15);
  }
}

void keyPressed() {
  if (key == 'r') {
    record = true;
    frameCount = 0;
  } else if (key == 's') {
    record = false;
  }
}

With 'r' you start the recording and with 's' you stop it. The image sequence is saved into a folder called 'frames' inside the sketch's data folder. Very basic, but it worked for me at first.
I then used the resulting image sequence in After Effects as a luma map to drive Trapcode Form's z-extrusion, and to be able to use custom particles etc. That turned out to work quite well. What annoyed me was the amount of glitchy data caused by the suboptimal recording situation in my office.
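To make the luma-map idea concrete, here is a rough standalone illustration (my own sketch, not Trapcode's actual implementation): Form essentially maps each pixel's brightness linearly onto a z-displacement range.

```java
// Illustration (assumption, not Trapcode's code): a luma map drives
// z-extrusion by mapping pixel brightness (0..255) linearly onto a z range.
public class LumaToZ {
    static float zFromLuma(int luma, float zMin, float zMax) {
        return zMin + (luma / 255.0f) * (zMax - zMin);
    }

    public static void main(String[] args) {
        System.out.println(zFromLuma(0, 0, 1000));   // darkest pixel: 0.0
        System.out.println(zFromLuma(255, 0, 1000)); // brightest pixel: 1000.0
        System.out.println(zFromLuma(128, 0, 1000)); // mid gray: roughly 502
    }
}
```

This also shows why glitch pixels are so visible: a single wrong gray value throws the corresponding grid point far off in Z.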

I found an example sketch by Elie Zananiri in the OpenKinect library that sets a dynamic threshold to record only a part of the depth information and filter out the rest.

I did not manage to output the depth image with the resulting threshold matte applied, but I realized it would be easier for a non-programmer to simply export two separate image sequences: one with the depth data and one with the "rough alpha" channel.

import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;
int kWidth  = 640;
int kHeight = 480;
int kAngle  = 15;

PImage depthImg;
int minDepth = 60;
int maxDepth = 860;
boolean record = false;

void setup() {
  size(kWidth, kHeight);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
  kinect.tilt(kAngle);
  depthImg = new PImage(kWidth, kHeight);
}

void draw() {
  // draw the raw depth image and save it as the first sequence
  image(kinect.getDepthImage(), 0, 0);
  if (record) {
    saveFrame("data/depth/depth-####.png");
  }

  // threshold the raw depth values into a black/white matte
  int[] rawDepth = kinect.getRawDepth();
  for (int i = 0; i < kWidth * kHeight; i++) {
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      depthImg.pixels[i] = 0xFFFFFFFF; // inside the range: white
    } else {
      depthImg.pixels[i] = 0;          // outside the range: black
    }
  }
  depthImg.updatePixels();

  // draw the thresholded matte and save it as the second sequence
  image(depthImg, 0, 0);
  if (record) {
    saveFrame("data/alpha/alpha-####.png");
    println("Recording " + frameCount);
  }
  println("TILT: " + kAngle);
  println("THRESHOLD: [" + minDepth + ", " + maxDepth + "]");
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      kAngle++;
    } else if (keyCode == DOWN) {
      kAngle--;
    }
    kAngle = constrain(kAngle, 0, 30);
    kinect.tilt(kAngle);
  } else if (key == 'q') {
    minDepth = constrain(minDepth + 10, 0, maxDepth);
  } else if (key == 'w') {
    minDepth = constrain(minDepth - 10, 0, maxDepth);
  } else if (key == 'z') {
    maxDepth = constrain(maxDepth + 10, minDepth, 2047);
  } else if (key == 'x') {
    maxDepth = constrain(maxDepth - 10, minDepth, 2047);
  } else if (key == 'r') {
    record = true;
    frameCount = 0;
  } else if (key == 's') {
    record = false;
  }
}

void stop() {
  kinect.quit();
  super.stop();
}

Then I played around a little with camera perspectives and particles, and this is the resulting animation:

The limitation I see here is that this approach only outputs 256 values of depth, while the Kinect has a total depth range of 2048 steps. In part two I will show you how I managed to access the full point cloud that comes out of the Kinect and use it further in Cinema 4D.
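To make that limitation concrete, here is a small standalone sketch (my own illustration, not from the recording workflow above) of how the 11-bit raw depth collapses when squeezed into an 8-bit grayscale image:

```java
// Illustration (assumption): quantization loss when 11-bit Kinect depth
// (raw values 0..2047) is stored as 8-bit grayscale (0..255).
public class DepthQuantization {
    public static void main(String[] args) {
        int rawSteps   = 2048; // 11-bit raw depth resolution
        int grayLevels = 256;  // 8-bit grayscale image resolution
        // every gray level has to cover this many raw depth steps:
        System.out.println(rawSteps / grayLevels); // prints 8
        // so raw readings 1000 and 1007 end up as the same gray value:
        System.out.println((1000 / 8) == (1007 / 8)); // prints true
    }
}
```

In other words, each gray level lumps together eight raw depth readings, which is exactly the banding you see in the extruded surface.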


I have published six new clips

I found some time to update my website and I am happy to present six new clips, animations, and Kinect explorations that I created this summer.
So check them out:

#40 The Sky (lab)

Techmeck (Blinkenlichten/ZDF)

My Brother Recorded On Kinect (also Kinect)

My Kinected Hand (Kinect)

GMO Freshness (lab)

We Are (lab)
