Wow, nicely done if this is genuine.
Hope someone will be kind enough to make it Mac-compatible.
Controlling the motors is most likely the easiest part of this puzzle, and the image data he is showing is just recorded off of the television from the Xbox 360. My assumption is that the device does not magically spit out a data matrix of depth values. The raw image data is probably fed to the Xbox 360 at a few kHz, and the processing to interpret it is done there. If that is true, creating the algorithms to properly read, process, and interpret this data will be the difficult part of this challenge.
Anyway, great progress so far, can't wait to see what happens next!
Dan, I remember reading that most of the processing is done on the Kinect. Look at the Kinect's teardown on iFixit; there are a ton of chips in that thing. I'm guessing most games take up the majority of the 360's CPUs anyway.
FYI, the first video is just footage from the Xbox 360's Kinect Hub section, and it was posted by a different user on the NUI Group forums.
I saw something that looked like this over the weekend (I forget where), and there were comments that it would NOT be winning the prize – while it seems real, the person apparently stated they have no intention of going "open" and plan to use their work for a commercial project.
However, this might be someone else? (The previous comments about NUI imply it isn’t.)
I agree with Dan. I think I have also read that the processing work is done by the Xbox. What the Kinect delivers depends on the SDK you use. You can use the skeleton that the MS SDK calculates for you, or you can write your own software; in that case you will only get an RGB signal and a depth map.
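And honestly, an RGB signal plus a depth map is already a lot to work with. A toy sketch of the kind of thing you could do with just the depth map (the frame data here is made up; real sensors also report 0 for "no reading", e.g. in IR shadow, which is the convention I assume below):

```python
def nearest_point(depth_map):
    """Return (row, col, depth) of the closest valid reading, or None.

    depth_map is a list of rows of raw depth readings; 0 means the
    sensor got no reading for that pixel, so those are skipped.
    """
    best = None
    for r, row in enumerate(depth_map):
        for c, d in enumerate(row):
            if d > 0 and (best is None or d < best[2]):
                best = (r, c, d)
    return best

frame = [
    [0, 900, 850],
    [700, 0, 880],
    [760, 720, 0],
]
print(nearest_point(frame))  # -> (1, 0, 700)
```

Tracking the nearest blob frame-to-frame like this is enough for a crude "where is the user's hand" demo, no skeleton math required.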