We have developed a mobile, battery-powered, wireless depth camera based on (and compatible with) Microsoft’s Kinect. To promote the use of this device across a wide range of domains, we are making the circuit diagrams and PCB layouts for the additional circuitry available. Our design uses only the front ‘camera’ circuit board of the Kinect. A second bespoke board of the same small size plugs onto the back of this board in place of the standard, larger Kinect board; this in turn connects via USB to a Gumstix embedded Linux computer, which runs an open source driver and streams the data over an 802.11n dongle. The design would work equally well with a Raspberry Pi or other single-board computers with a bit of hacking.
This work is part of the RCUK Digital Economy PATINA project; for more information, see www.patina.ac.uk.
Below is a video of an early version we built last year.
My mom has lived with aphasia ever since she suffered a serious stroke twelve years ago. In the meantime, there’s been a revolution in communication – powered by social media. Like a lot of people, I use the phone less. One of my areas of interest has been bridging the digital “keyboard gap” for people like my mom.
If anyone asks why hack a Kinect, here’s a good video to show them.
Kinect@Home is a place where you can help robotics and computer vision researchers around the world and get 3D models of your room, office or whatever you want in return, right in your browser!
Kinect@Home aims to use your powers to make robots more awesome than ever. Robotics and computer vision researchers need vast numbers of images from everyday environments, such as homes and offices, to improve their algorithms.
The installation demonstrates how the city surface continuously gathers information about people’s movements, allowing vehicles to interact with the environment. Kollision developed a real-time graphics engine and the tracking software, which receives live input from 11 Xbox Kinect cameras mounted above the visitors’ heads. Through the cameras, the movements of the visitors are processed into patterns of movement displayed on the LED surface.
Inside a spartan garage in an industrial neighborhood in Palo Alto, Calif., a robot armed with electronic “eyes” and a small scoop and suction cups repeatedly picks up boxes and drops them onto a conveyor belt.
It is doing what low-wage workers do every day around the world.
Older robots could not do such work because computer vision systems were costly and limited to carefully controlled environments where the lighting was just right. But thanks to an inexpensive stereo camera and software that lets the system see shapes with the same ease as humans, this robot can quickly discern the irregular dimensions of randomly placed objects.
The robot uses a technology pioneered in Microsoft’s Kinect motion-sensing system for its Xbox video game console.
This project combines the collective talents of musicians, dancers, programmers, designers and animators to create an amazing visual instrument. Creating music through motion is at the heart of this project, which uses the power of the Kinect to capture movement and translate it into music, performed live and projected on a huge wall. v.co.nz/#the-motion-project
We created and designed the live visual spectacle, with a music video being produced from the results. We wanted it to be clear that the technology was real and actually being played live. The interface plays a key role in illustrating the idea of the instrument, and we designed it to highlight the audio being controlled by the dancer. Design elements like real-time tracking and samples being drawn on as they are played all add to the authenticity of the performance. The visuals are all created live, and the music video is essentially a real document of the night. assemblyltd.com/
Updated tutorial: Hacking the Kinect – Reverse engineering the Microsoft Kinect. Everyone has seen the Xbox 360 Kinect hacked in a matter of days after our “open source driver” bounty - here’s how we helped the winner and here’s how you can reverse engineer USB devices as well!
USB is a very complex protocol, much more complicated than serial, parallel, SPI, or even I2C. USB uses only two data wires, but they are not dedicated ‘receive’ and ‘transmit’ lines as in serial. Rather, the data is bidirectional and differential – that is, the signal depends on the difference in voltage between the two data lines, D+ and D−. If you want to do more USB hacking, you’ll need to read Jan Axelson’s USB Complete books; they’re easy to follow and discuss USB in both depth and breadth.
USB is also very structured. This is good for reverse engineering because it means that at least the format of packets is agreed upon and you won’t have to deal with checksums yourself. The bad news is that you need software assistance to decode the complex packet structure. The good news is that every computer made today has a USB host controller that does a lot of the tough work for you, and there are many software libraries to assist.
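That agreed-upon format is concrete: every USB control transfer begins with a fixed 8-byte SETUP packet, which is what makes sniffed dumps decodable in the first place. Here is a minimal sketch of decoding one in Python (field layout per the USB 2.0 specification; the example bytes are just an illustrative standard GET_DESCRIPTOR request, not taken from a Kinect capture):

```python
import struct

def parse_setup_packet(data: bytes) -> dict:
    """Decode the 8-byte SETUP packet that starts every USB control transfer."""
    # Little-endian: two single bytes, then three 16-bit fields.
    bmRequestType, bRequest, wValue, wIndex, wLength = struct.unpack("<BBHHH", data)
    return {
        "direction": "device-to-host" if bmRequestType & 0x80 else "host-to-device",
        "type": ("standard", "class", "vendor", "reserved")[(bmRequestType >> 5) & 0x3],
        "recipient": bmRequestType & 0x1F,
        "bRequest": bRequest,
        "wValue": wValue,
        "wIndex": wIndex,
        "wLength": wLength,
    }

# A standard GET_DESCRIPTOR (device descriptor) request, as any sniffer would show:
pkt = parse_setup_packet(bytes([0x80, 0x06, 0x00, 0x01, 0x00, 0x00, 0x12, 0x00]))
print(pkt["type"], pkt["direction"], pkt["wLength"])   # → standard device-to-host 18
```

Running a captured dump through a decoder like this is usually the first step: once each transfer is labeled by type and request number, the vendor-specific requests that need reverse engineering stand out from the standard ones.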
Today we’re going to be reverse engineering the Xbox Kinect Motor, one part of the Kinect device.
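The tilt motor makes a good first target because its protocol, as worked out by the OpenKinect community, is a single vendor control request. The sketch below builds the raw SETUP packet for a ‘set tilt’ command using the parameters they reported (bRequest 0x31, angle in half-degree units in wValue); treat the exact encoding and the ±31° range as assumptions to cross-check against libfreenect before sending the transfer with a library such as PyUSB:

```python
import struct

# Control-transfer parameters for the Kinect tilt motor, as reverse engineered
# by the OpenKinect community. The request number and angle encoding here are
# taken from their notes -- verify against libfreenect before relying on them.
SET_TILT_REQUEST = 0x31

def tilt_setup_packet(degrees: int) -> bytes:
    """Build the 8-byte SETUP packet for a 'set tilt' vendor request."""
    degrees = max(-31, min(31, degrees))    # approximate mechanical range of the head
    wValue = (degrees * 2) & 0xFFFF         # half-degree units, two's-complement
    # bmRequestType 0x40 = host-to-device, vendor request, recipient device
    return struct.pack("<BBHHH", 0x40, SET_TILT_REQUEST, wValue, 0, 0)

print(tilt_setup_packet(10).hex())   # → 4031140000000000
```

With PyUSB you would not build the bytes by hand; you would pass the same fields to `dev.ctrl_transfer(0x40, 0x31, wValue, 0, [])` after finding the motor device on the bus. Constructing the packet explicitly, though, makes it easy to compare against what a sniffer shows the official driver sending.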
What is Bilibot?
Bilibot, from the German word ‘billig’, or cheap, is a sophisticated robotics platform at an affordable price. A Bilibot consists of:
* a powerful computer
* an iRobot Create
* a Kinect sensor
* mounting hardware to put it all together
* the Robot Operating System (ROS), with research contributions from roboticists all over the world!
On the 25th of June, people in Mainz (Germany) will play the classic of all video games on the street: a projector and an Xbox Kinect will display the game screen on the ground, with paddles controlled by body movement. Colleagues from mediaman in Mainz had the idea – and a lot of fun while trying, playing and programming.
To use a Kinect with a computer instead of an Xbox, Watson needed a “driver” (basically a bit of software) that did not exist. He joined a small, far-flung, highly dedicated and technically sophisticated community effort dubbed OpenKinect, which sprang up immediately after the Kinect was introduced, to write the code that would make this possible. At the same time, Adafruit, a hobbyist-focused electronics company based in New York, offered $1,000 to the first person or group to write the necessary code in an open-source format.
At the time — this was shortly before the 2010 holiday season — Microsoft’s primary Kinect focus was the mainstream game-playing market. Its first response to OpenKinect seemed predictable: CNET reported an unnamed spokesperson declaring that the company “does not condone the modification of its products” and would “work closely with law enforcement . . . to keep Kinect tamper-resistant.” Adafruit increased its prize, ultimately to $3,000. Within days a developer in Spain posted videos demonstrating that he had made his Kinect work with a P.C. OpenKinect refined and spread the open-source driver code, and a variety of “Kinect hacks,” as they came to be called, proliferated in YouTube videos. (An early example involved a Kinect used to create a version of the hand-swipe control contraption Tom Cruise used in “Minority Report.”) Soon Watson and his wife, Emily Gobeille, posted their own video, in which her hand movements were captured by a Kinect and translated onto a screen displaying a computer-generated bird figure, which she controlled like a high-tech puppet.
Long story about the conflict and tension at Microsoft with the Kinect hacking… One note, we published the USB dump (and example) to get folks started on the open-source driver in addition to the bounty project with Johnny Lee.
Ever since the Kinect emerged on the scene, its depth-sensing camera has fascinated legions of creative coders, but the team behind the RGB+D Toolkit is one of the few attempting to transform the gaming peripheral into a real filmmaking tool. Using a Kinect and a standard DSLR camera, like your Canon 5D, these avant-garde image-makers have created a technique that allows you to map video from the DSLR onto the Kinect’s 3D data to generate a true hybrid of CGI and video.
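Conceptually, that mapping rests on the pinhole camera model: each depth pixel is back-projected to a 3D point, then re-projected through the second camera’s calibrated intrinsics. A minimal sketch of the geometry follows; the intrinsic values are illustrative placeholders rather than a real calibration, and a real pipeline like the toolkit’s also needs the extrinsic transform between the two cameras:

```python
# Sketch of the geometry behind mapping an external camera's video onto depth
# data: back-project a depth pixel to a 3D point, then project it into the
# color camera. All numeric values below are illustrative placeholders.

def backproject(u, v, z, fx, fy, cx, cy):
    """Depth pixel (u, v) with depth z (meters) -> 3D point in the camera frame."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

def project(p, fx, fy, cx, cy):
    """3D point -> pixel coordinates via the pinhole model."""
    x, y, z = p
    return (fx * x / z + cx, fy * y / z + cy)

# Sanity check with placeholder intrinsics: a pixel maps back to itself when
# the same camera model is used for both steps.
fx = fy = 525.0
cx, cy = 319.5, 239.5   # commonly quoted Kinect-like values, used here as stand-ins
p = backproject(100, 200, 1.5, fx, fy, cx, cy)
u, v = project(p, fx, fy, cx, cy)
print(round(u), round(v))   # → 100 200
```

In the real tool the second projection uses the DSLR’s intrinsics (and the rigid transform between the two cameras) instead of the Kinect’s own, which is what lets the high-quality video texture land on the right 3D points.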
Why is this exciting? Well, for one thing, convincing CGI is incredibly difficult to do—it took the team behind Rockstar’s L.A. Noire a full 32 cameras and god knows how many man-hours to record and digitally reconstruct their characters in 360 degrees. And while the experimental output from the RGB+D team is a far cry from those painstakingly constructed game visuals, that’s kind of not the point. The point is the implications—this has the potential to change the way we think of 3D filmmaking and to significantly lower the barrier to entry, using commercially available hardware and open source software.
Today, members of the RGB+D team—James George and Jonathan Minard—released the culmination of their research to date: an excerpt of an ongoing documentary project called Clouds that they’ve been developing alongside the RGB+D Toolkit, their open source video editing application (which looks like a cross between Final Cut Pro and a video game engine). Clouds features interviews with prominent computer hackers, media artists, and critics discussing the creative use of code, the future of data, interfaces, and computational visuals, presented as a series of conversational vignettes.
Kinect as a tool of narrative film was as inevitable as sunshine in the summertime.
Space innovators at the University of Surrey and Surrey Satellite Technology Limited (SSTL) are developing ‘STRaND-2’, a twin-satellite mission to test a novel in-orbit docking system based upon XBOX Kinect technology that could change the way space assets are built, maintained and decommissioned.
STRaND-2 is the latest mission in the cutting-edge STRaND (Surrey Training, Research and Nanosatellite Demonstrator) programme, following on from the smartphone-powered STRaND-1 satellite that is near completion. Similar in design to STRaND-1, the identical twin satellites will each measure 30 cm (a 3-unit CubeSat) in length, and utilise components from the XBOX Kinect games controller to scan the local area and provide the satellites with spatial awareness on all three axes.
Kreek is a Kinect-controlled interface which extends a normally two-dimensional multi-touch environment with the perception of depth. This allows the user to literally reach into the interface and gives applications the ability to interpret parameters like pressure or distance from the surface.
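Reaching ‘into’ an interface this way depends on turning the Kinect’s raw 11-bit readings into real distances. A commonly circulated first-order approximation from the OpenKinect community is sketched below; the coefficients are empirical averages from shared calibrations, so per-device results will differ:

```python
def raw_depth_to_meters(raw: int) -> float:
    """Approximate metric depth from a raw 11-bit Kinect disparity value.

    The coefficients are the widely shared OpenKinect/Burrus calibration
    figures; individual devices vary, so treat the result as a rough estimate.
    The model only holds for roughly raw < 1050.
    """
    if raw >= 2047:                 # 2047 marks 'no reading' in the raw stream
        return float("inf")
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

# Under this model, a larger raw disparity value means a more distant surface:
print(raw_depth_to_meters(500) < raw_depth_to_meters(800))   # → True
```

An interface like Kreek can then map that distance onto interaction parameters, for example treating how far a hand has pushed past a threshold plane as ‘pressure’.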