Just a quick update on some of the impressive visualization and manipulation you can do in Processing with camera feeds.
Raw depth data from the Kinect makes results like the one seen above possible. Through SimpleOpenNI, a skeleton is registered to each user's body, providing a wealth of information for interaction by and with participants.
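To give a sense of the math behind a depth point cloud like this, here is a minimal sketch of the pinhole-camera projection that turns a depth pixel into a 3D point. The intrinsics used (a 640x480 depth image, roughly 525 px focal length) are nominal values often quoted for the original Kinect, not calibrated ones; SimpleOpenNI does this conversion for you on real hardware via `depthMapRealWorld()`.

```java
// Sketch of depth-pixel-to-world projection using a pinhole camera model.
// All intrinsic values below are assumed nominal Kinect figures.
public class DepthToWorld {
    static final double FX = 525.0, FY = 525.0;   // focal lengths in pixels (assumed)
    static final double CX = 319.5, CY = 239.5;   // principal point for a 640x480 image (assumed)

    // Convert depth pixel (x, y) with depth in millimeters to a 3D point
    // in camera space, also in millimeters: {worldX, worldY, worldZ}.
    static double[] project(int x, int y, int depthMm) {
        double z = depthMm;
        double wx = (x - CX) * z / FX;
        double wy = (y - CY) * z / FY;
        return new double[] { wx, wy, z };
    }

    public static void main(String[] args) {
        // A pixel to the right of the image center maps to a positive x in space,
        // and the point's z is simply the measured depth.
        double[] p = project(639, 239, 2000);
        System.out.printf("world point: (%.1f, %.1f, %.1f) mm%n", p[0], p[1], p[2]);
    }
}
```

Looping this over every depth pixel and drawing each returned point (in Processing, with `point()` inside a 3D sketch) is essentially how the monochrome cloud is built.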
By sampling the Kinect's RGB camera for color information at each point, one can replace the monochrome points with colored ones, an effect that shows off the real visualization potential of the Kinect when the system is used to its fullest. Such an effect is seen here:
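The color lookup itself is simple once the depth map is registered to the RGB image (on real hardware, SimpleOpenNI's `alternativeViewPointDepthToImage()` handles that alignment): the color for depth pixel (x, y) is just the RGB pixel at the same coordinates. A minimal sketch, with the RGB image as a plain packed-int array standing in for Processing's `pixels[]`:

```java
// Sketch of per-point color sampling from a registered RGB image.
// rgbPixels is a row-major array of packed 0xRRGGBB values, as in
// Processing's PImage.pixels[]. The registration step that makes
// depth and RGB coordinates line up is assumed to have happened.
public class ColorSample {
    static int colorAt(int[] rgbPixels, int width, int x, int y) {
        return rgbPixels[y * width + x];
    }

    public static void main(String[] args) {
        int w = 4, h = 3;
        int[] rgb = new int[w * h];
        rgb[2 * w + 1] = 0xFF8800;  // mark one pixel orange
        System.out.printf("color at (1, 2): 0x%06X%n", colorAt(rgb, w, 1, 2));
    }
}
```

In the actual sketch, each cloud point is then drawn with `stroke(colorAt(...))` before plotting it, which is what turns the monochrome cloud into a colored one.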
Bonus video of me nerding out over this cool technology: