It had to happen. When Google showed off a new and revolutionary Gmail Motion control scheme yesterday, it failed to fool most people, but it didn’t fail to catch the attention of some motion control geeks with Kinect cameras on hand. Yep, the FAAST crew that’s already brought us a Kinect keyboard emulator for World of Warcraft has taken Google to task and actually cooked up the software to make Gmail Motion work. All your favorite gestures are here: opening an email as if it were an envelope, replying by throwing a thumb back and, of course, “licking the stamp” to send your response on its way. Marvelous stuff! Jump past the break to see it working, for real this time.
NOTE: the upload date for this video is April 1st, which means that this hack was accomplished within hours of the “Gmail Motion” joke first appearing on Google. Props to the FAAST guys for wasting no time!
MIT’s Robust Robotics Group seems to be as thrilled with the Kinect and the hacking possibilities that emanate therefrom as we are. They’ve attached a Kinect to a quadrocopter, which enables completely autonomous 3-D mapping and flight; even the processing is done on board.
MIT worked with the University of Washington on this project, using UW’s SLAM (Simultaneous Localization And Mapping) algorithms to construct these pretty models of the environment from the data picked up by the Kinect’s sensors. The SLAM maps are actually a bonus on top of the project’s main goal, which is fully autonomous flight in areas without GPS coverage: the maps are processed off-site and aren’t necessary for the quadrocopter’s operation.
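To get a feel for what the mapping half of SLAM does with depth data, here is a deliberately tiny sketch (not MIT/UW’s actual algorithm, which also has to estimate the sensor’s pose; here the pose is assumed known): each range reading marks the grid cells it passes through as free, and the cell where it terminates as occupied.

```python
import math

def update_grid(grid, pose, heading, depth, cell=0.25):
    """Update a 2-D occupancy grid from one depth reading.

    grid: dict mapping (col, row) -> 'free' or 'occupied'
    pose: sensor (x, y) in meters; heading: beam angle in radians;
    depth: measured range in meters; cell: grid resolution in meters.
    Cells along the beam are marked free; the endpoint cell is occupied.
    """
    steps = int(depth / cell)
    for i in range(steps):
        x = pose[0] + math.cos(heading) * i * cell
        y = pose[1] + math.sin(heading) * i * cell
        grid[(int(x // cell), int(y // cell))] = "free"
    x = pose[0] + math.cos(heading) * depth
    y = pose[1] + math.sin(heading) * depth
    grid[(int(x // cell), int(y // cell))] = "occupied"
    return grid

# One reading straight ahead (heading 0) hitting a wall 1 m away:
grid = update_grid({}, (0.0, 0.0), 0.0, 1.0)
```

Run over thousands of Kinect depth pixels per frame, from a pose that the localization half keeps estimating, this kind of accumulation is what produces those environment models.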
For months, we’ve known that Microsoft’s Kinect could help make video games fun. But who knew that it projects such beautiful light?
Until San Francisco Bay Area artist Audrey Penven and some friends started taking pictures of themselves playing Kinect games, no one. But when Penven looked at the images, she realized she was on to something special.
In normal light, you can’t even see the light put out by the Kinect, Microsoft’s new motion control system for the Xbox 360. But with the help of a roommate’s camera, modified to shoot infrared, Penven discovered scenes at once ghostly and straight from the cover of a Neal Stephenson novel.
We continue to be shocked by the increasingly ridiculous, often ingenious purposes human beings are putting the Kinect sensor to. I don’t understand why this is happening, other than the Internet is rad, and it’s put a lot of fascinating, committed people in contact with one another who have subsequently merged to become Amalgama, she who is The All-God.
You don’t have to wait for any other site to link you to what’s happening now; you can just go to Kinect Hacks the way they themselves do and (in effect) peer into the future. I don’t just mean the future of content on blogs, I mean the fcking ineffable Goddamn future. You can watch some cyber-shamans writhe until music comes out. And you don’t ever have to stop watching it. You can craft an entire identity around these watchings, and have that be your thing.
This is way, way past Microsoft right now; this is the unplannable event. It isn’t really theirs anymore, not in a real way: the promise is vested. The richness these innovators are tapping into has to do with access to a full-bandwidth stream from the device coupled with the power of a modern computer. Could they even capture these uses on the home console if they wanted to? And even if they could, do most people want to inhabit digital sound showers (or whatever is cutting edge at this exact moment)? I don’t mean, should they want to – the answer to that is obviously yes. But do they?
What’s next will be something to see, certainly; not a sideshow version of the prime interface, not the work of some rogue cell, but an experience which assumes gestural control the way rich presence or voice chat is assumed today. I don’t know what that looks like, exactly, and I’m not sure they do either, but I do think that when they look back to 2010 they may find it difficult to stifle their laughter.
Doctors at Sunnybrook hospital in Toronto, Canada have taken interactive gaming to the next level by hooking up a Kinect sensor to their medical imaging computer. Now doctors in the operating room have direct access to MRI scans without having to leave the room, consult the scans, and then disinfect and scrub back in. The hack lets them virtually manipulate the scans and pull up the necessary information on screen with a wave of a hand.
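A hedged sketch of the core of such an interface, assuming the skeleton stream already gives us a tracked hand position (the function, gains, and view representation here are invented for illustration, not Sunnybrook’s actual code): map the hand’s displacement since a “grab” gesture began onto pan and zoom of the displayed scan.

```python
def hand_to_scan_view(grab_pos, hand_pos, base_view,
                      pan_gain=2.0, zoom_gain=1.5):
    """Map a hand's displacement since the 'grab' began onto the scan view.

    Positions are (x, y, z) in meters from the depth camera; the view is
    (pan_x, pan_y, zoom). Moving the hand sideways pans the scan; pulling
    it toward the camera (smaller z) zooms in. Gains are made-up tuning
    values, not anything from the real system.
    """
    dx = hand_pos[0] - grab_pos[0]
    dy = hand_pos[1] - grab_pos[1]
    dz = hand_pos[2] - grab_pos[2]
    pan_x = base_view[0] + pan_gain * dx
    pan_y = base_view[1] + pan_gain * dy
    zoom = max(0.1, base_view[2] - zoom_gain * dz)  # pull closer -> zoom in
    return (pan_x, pan_y, zoom)

# Hand moves 10 cm right and 20 cm toward the camera during a grab:
view = hand_to_scan_view((0.0, 0.0, 1.0), (0.1, 0.0, 0.8), (0.0, 0.0, 1.0))
# -> pans right and zooms in
```

The appeal for the operating room is that the whole loop is touchless: nothing sterile ever contacts the computer.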
In the ’90s and early 2000s, Moore’s law was absolute king. The primary deciding factor in purchasing an electronic product was simply how fast it was. This meant an intense focus on tighter and tighter integration of components, and all the functionality was disappearing into tiny little black chips that could not be accessed or modified by mere mortals. But now, people barely talk about raw “megahertz” or “megabytes” anymore. General-purpose computers have gotten “fast enough”. We now want specialized kinds of computers: one that fits in our pocket, one that plays games in 3D, one shaped like a tablet, one that goes in our car, one that can go under water or get strapped to your snowboard and not break. We have reached a surplus in computing power that makes it affordable to build (and buy) devices for smaller and smaller needs. Our imagination for what to do with computing has simply not kept up with Moore’s law. So we find more uses for more modest amounts of computing power. But what does this have to do with the DIY community?
A byproduct of having such an immense surplus in computing is that the tools you can buy on a hobbyist budget have also gotten exponentially better in just the past 3-4 years, while the improvement in professional tools has been more modest. The difference in capability between the electronics workbench of a professional engineer and that of a hobby engineer is getting really, really small. The Kinect is an overwhelming example of this: the cost of a high-quality depth camera dropped nearly two orders of magnitude overnight. As a result, hobbyists are outpacing many professionals in the same domain simply due to sheer parallelism. Perhaps not as dramatically, but this is happening with nearly all genres of electronic and scientific equipment. One day, maybe we’ll see backyard DIY electron-beam drilling for nano-machining.
When it is no longer about who has the most resources but about who has the best ideas, it becomes a pure numbers game:
Take 10,000 professional engineers vs. 1 million hobbyists with roughly equivalent tools. Which group will make progress faster? Now, consider that you have to pay the 10,000 engineers $100K/year to motivate them to work, while the 1 million hobbyists are working for the love of it. Does that change your answer? Even if it doesn’t, you have to concede that there exists some ratio at which the output of the two groups is equal. It’s merely a matter of time.
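The break-even point in that thought experiment is simple arithmetic. A minimal sketch, with a purely illustrative assumption that one full-time professional produces ten times what a nights-and-weekends hobbyist does:

```python
def group_output(headcount, per_person_productivity):
    """Total output of a group, in arbitrary 'idea units' per year."""
    return headcount * per_person_productivity

# Illustrative assumption: one professional = 10 hobbyists of output.
pro_rate, hobby_rate = 10.0, 1.0

pros = group_output(10_000, pro_rate)            # 100,000 units/year
hobbyists = group_output(1_000_000, hobby_rate)  # 1,000,000 units/year

# Under these assumptions, the hobbyist pool out-produces the pros 10:1...
ratio = hobbyists / pros

# ...and the break-even pool size is where the two outputs match:
# 100,000 hobbyists would equal 10,000 professionals.
break_even_hobbyists = 10_000 * pro_rate / hobby_rate
```

Even if you think the per-person productivity gap is much larger than 10x, the argument only needs the hobbyist pool to keep growing faster than the gap.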
If you follow me through this argument (which I won’t claim is bulletproof, but it explains the trends we are observing quite nicely), then it has an interesting implication for organizations that are currently funding big research groups. When it’s simply a matter of who has the best ideas, it’s tough to employ enough people to get good coverage. You could spend a lot of energy trying to find the “best” people, but that’s about as challenging as predicting the stock market. Some inventors simply go “dry” of good ideas and end up not providing a good lifetime return on investment. (I fully expect this to happen to me someday. I just hope it happens later rather than sooner.)
So to me, this suggests three options for big exploratory organizations…
Yeah, this video is a little weird, but it does discuss a valid idea. If we’re going to have robots in hospitals and the service industry, they are going to have to make contact with humans at some point. How’s that gonna work out?
Also note the “head” of the robot at the end of the video. In the span of less than 6 months, the Kinect has gone from a video game peripheral to a regular tool in robotics labs around the world. Nice!
A delta robot is controlled by a Kinect through Processing and Arduino. The performer’s movements directly control the position of the robot’s effector and the rotation and opening of the gripper. Once the platform is properly calibrated (it’s still a little rough round the edges!), several autonomous behaviours will be implemented.
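A pipeline like this boils down to: read a joint position from the Kinect, rescale it into the robot’s much smaller workspace, and stream the result to the microcontroller as a serial command. A hedged sketch of that rescaling step (the workspace bounds and command grammar are invented for illustration; the real rig’s protocol will differ):

```python
def scale_to_workspace(value, src_min, src_max, dst_min, dst_max):
    """Linearly rescale one coordinate, clamping to the source range first."""
    value = min(max(value, src_min), src_max)
    t = (value - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

def hand_to_command(hand_xyz, grip_open):
    """Turn a tracked hand position (meters, Kinect frame) into a serial
    command for the effector, e.g. 'M 0.0 0.0 -120.0 G1'. The workspace
    limits (in mm) and the command format are hypothetical examples."""
    x = scale_to_workspace(hand_xyz[0], -0.5, 0.5, -80.0, 80.0)
    y = scale_to_workspace(hand_xyz[1], -0.5, 0.5, -80.0, 80.0)
    z = scale_to_workspace(hand_xyz[2], 0.8, 2.0, -200.0, -120.0)
    return f"M {x:.1f} {y:.1f} {z:.1f} G{1 if grip_open else 0}"

# Hand centered, 2 m from the camera, gripper open:
cmd = hand_to_command((0.0, 0.0, 2.0), True)
# -> "M 0.0 0.0 -120.0 G1"
```

In the actual setup, Processing would emit a string like this over serial each frame, and the Arduino would parse it and run the delta robot’s inverse kinematics.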
Meka Robotics is unveiling this week its Meka M1 Mobile Manipulator, a humanoid system equipped with two dexterous arms, a head with Microsoft Kinect sensor, and an omnidirectional wheel base. The robot runs Meka’s real-time control software with ROS extensions.
Meka, a San Francisco-based start-up founded by MIT roboticists, says the M1 is designed to work in human environments and combines “mobility, dexterity, and compliant force-control.” It seems Meka is targeting research applications first, whereas other companies developing similar manipulators, like pi4 robotics in Germany and Rodney Brooks’ Heartland Robotics, are focusing on industrial uses.
Within weeks after the Kinect hit stores, scientists, programmers and researchers hacked away at the device. It turns out that the Kinect, which consists of cameras and an infrared-light sensor to track and follow your body movement, has applications for medical purposes, language learning and even partying outdoors. Those applications are enabled by a relatively open programming interface which lets people quickly hack together their own custom software to interface with the Kinect hardware.
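That openness starts at the data itself. As a hedged illustration, the formula below uses the approximate calibration constants published by the OpenKinect community (not an official Microsoft specification) to turn the sensor’s raw 11-bit depth values into meters:

```python
def raw_depth_to_meters(raw: int) -> float:
    """Approximate conversion of a raw 11-bit Kinect depth value to meters,
    using calibration constants from the OpenKinect community. Raw values
    at or above 2047 indicate 'no valid reading'."""
    if raw >= 2047:
        return float("inf")  # sensor could not measure this pixel
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

# A raw reading of 600 corresponds to roughly 0.67 m from the sensor.
print(round(raw_depth_to_meters(600), 2))
```

With nothing more than that per-pixel conversion, a frame of raw data becomes a metric 3-D point cloud, which is why so many of these hacks could come together in days rather than months.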
None of these hacks are officially supported by Microsoft, but they demonstrate the amazing potential of turning the human body into an interface controller. Who’da thunk a gaming gadget would be so powerful?
Microsoft’s Craig Mundie and Don Mattrick recently announced that the company would be releasing a non-commercial SDK for its Kinect 3D motion controller sometime this spring. Today, just seven days later, Belgian 3D interface company SoftKinetic has launched its free SDK for all depth-sensing cameras, including Microsoft’s Kinect.
“We want to expand the community of developers able to access our professional tools and technology,” said Eric Krzeslo, Chief Strategy Officer of SoftKinetic. “We believe that opening up our cross-platform, multi-camera software to a broader community will enhance productivity and creativity, and we cannot wait to see the incredible innovations that emerge as a result.”
We first took a look at SoftKinetic’s Natural User Interface (NUI) technology back in December, when the company began accepting its first participants in its content partner program; then at CES 2011, we got to see what SoftKinetic’s iisu platform was capable of. Today, interested developers can get their hands on the iisu v.2.8 SDK for free, non-commercial use.
It seems there is a race on to offer free SDKs for the Kinect.
When you buy a new piece of technology – like a phone, or a games console – you most likely unpack it, plug it in, and use it according to the instructions. But there is a particular type of tech-head who would prefer to take it apart, see how it works, and then make it do something else. Scores of enthusiasts did exactly that to Microsoft’s new motion sensing games controller, Kinect, back in November. Marc Cieslak looks at the “modders” pushing devices far beyond their intended capabilities.
Yay! This makes me happy. Microsoft officially announces support for Windows drivers for the Kinect camera as a free download in the spring.
This was something I was pushing really hard on in the last few months before my departure, and I am glad to see the efforts of colleagues in Microsoft’s research wing (MSR) and the Xbox engineering team carry this to fruition. It’s unfortunate this couldn’t have happened closer to launch day. But perhaps it took all the enthusiasm of the independent developer community to convince the division to do this. It certainly would have been nice if all this neat work had been done on the company’s own software platform.
I actually have a secret to share on this topic. Back in the late summer of 2010, arguing from within Microsoft for even the most basic level of PC support for the Kinect turned out, to my immense disappointment, to grind hard against the corporate grain at the time (for many reasons I won’t enumerate here). When my frustration peaked, I decided to approach Adafruit to put on the Open Kinect contest. For obvious reasons, I couldn’t run the contest myself. Besides, Phil and Limor did a phenomenal job, much better than I could have done. Without a doubt, the contest had a significant impact in raising awareness, both inside and outside the company, of the Kinect’s potential beyond Xbox gaming.
Johnny approached us (while at Microsoft, he’s now at Google) and we said we’d help out – so, we reverse engineered the Kinect on our own, started the contest, posted the USB logs to GitHub – and then Hector won. Since we didn’t need to put up all the cash ourselves, a chunk went to the EFF. It was a little spy vs spy for us to keep Johnny safe, and that also made it (more) fun.