

Monday, March 30, 2009

The Impact of 3D Cameras on UIs

I'm excited by the new 3D camera technology coming soon. See, for example, this Microsoft demo of the 3D camera in action. Also see the camera maker's site, with some fantastic video demos of how highly accurate 3D data can be used. Some positives from my perspective:

  • True 3D cameras will make face recognition a lot easier
  • Gesture-driven interfaces can rely on low-latency, absolute 3D data, without the need for CPU-expensive, laggy image analysis
  • 3D cameras make it much easier to cut out and edit layered images in post-production
It seems gesture recognition has been delayed for some time by the computational cost and ambiguity of the data produced by image-recognition algorithms. I see this new 3D hardware being much like the cockroach's 'distributed foot' discussed about 7 1/2 minutes into this great TED talk: with good hardware design you can negate the need for a lot of computation.


And negatives:

  • I have some concerns about the safety of constant IR pulses for our eyes
  • 3D displays can't yet control the optical focal point of each pixel, which makes them tiring to watch. This is a tough one.

The Microsoft demo:


About 7 1/2 minutes into the talk you can see the distributed foot in action, illustrating how good design can reduce the need for cognition.

[UPDATE:]
Here's a more in-depth explanation of the technology and its implications.

How these 3D cameras work: they send out a pulse of infrared light, and each photosensor times how long the IR pulse takes to reflect back. The sensor also captures the color of light at that pixel, so it records both the color and the distance of every single pixel in the image.
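To make the timing arithmetic concrete, here's a minimal sketch in Python (purely illustrative; the function name and numbers are mine, not any camera maker's API): the pulse travels out to the object and back, so the distance is half the round-trip time multiplied by the speed of light.

    # Illustrative time-of-flight calculation: each pixel's sensor times how long
    # the IR pulse takes to return, and distance follows from the speed of light.
    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def depth_from_round_trip(round_trip_seconds: float) -> float:
        """The pulse travels out and back, so halve the round trip."""
        return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

    # A reflection arriving ~10 nanoseconds after the pulse puts the object
    # roughly 1.5 metres from the camera.
    print(depth_from_round_trip(10e-9))  # ≈ 1.499 m

Do that calculation for every photosensor and you get a full depth map alongside the color image.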

What would cheap 3D cameras mean to computing?

  • In gaming, imagine Wii-like games for your whole body. Duck bullets, jump around the room to avoid stepping on landmines (or flowers). Play virtual dodgeball against someone in another city. I'd love a gym with a room dedicated to these kinds of games. Cheap, accurate and available hardware would mean many more software developers, resulting in more innovation and choice
  • Imagine your computer recognizing when you walk in front of it. Or putting itself to sleep when you walk out of a room
  • Imagine a gestural interface, not Minority Report-style, but with your hands resting on a desk or other comfortable surface
  • I envision eye-tracking improving in accuracy once the computer can see a 3D model of your head and knows exactly where your eyes are and at what angle your head is tilted
  • Easy 3D modelling for selling stuff on eBay, getting clothes custom-cut to match your physique, or tracking your body shape and emailing your figure to your personal trainer
  • Piece-of-cake removal of your background for video chat. Combine that with the 3D data to place an actual 3D model of you in Second Life, World of Warcraft, etc. (see the sketch after this list)
  • Good, cheap 3D data could tell the computer that your wrinkled brow means you're not happy with it deciding to download a Windows Update right now
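As a rough illustration of the background-removal idea above: once every pixel carries a depth value, cutting yourself out of the frame is little more than a threshold. This is a minimal sketch assuming a hypothetical per-pixel depth map in metres and an arbitrary 1.2 m cut-off, not any particular camera's API.

    import numpy as np

    def remove_background(frame_rgb: np.ndarray,
                          depth_m: np.ndarray,
                          max_depth_m: float = 1.2) -> np.ndarray:
        """Keep pixels closer than max_depth_m; black out everything behind them."""
        foreground_mask = depth_m < max_depth_m            # True where the subject is
        return frame_rgb * foreground_mask[..., np.newaxis]

    # Fake data: a 480x640 colour frame and a matching depth map where
    # everything is 3 m away except a person-shaped region at 0.8 m.
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    depth = np.full((480, 640), 3.0)
    depth[100:380, 200:440] = 0.8
    cut_out = remove_background(frame, depth)

In practice you'd smooth the mask edges and composite over a replacement background, but the depth threshold does most of the work that green screens or heavy image analysis have to do today.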


