UI&us is about User Interface Design, User Experience design and the cognitive psychology behind design in general. It's written by Keith Lang, co-founder of Skitch; now a part of Evernote. His views and opinions are his own and do not represent in any way the views or opinions of any company.
I wanted to share with you a little side project I've recently completed with a good friend. Jigsaw Junior for iPad (iTunes link) is a kids' jigsaw puzzle app we've just released — read more at the official Jigsaw Junior site here.
Definitely got a 'laundry list' of things to improve, so I'd love to hear your feedback to fill in the gaps.
My Dad sent me this video today. Apparently it's been doing the rounds since 2009, but I'd not seen it. The video is from the TV show Ukraine's Got Talent and contains eight and a half minutes of astounding 'sand animation' by Kseniya Simonova.
Take a little break and watch the performance of 'The Great Patriotic War' here. Link to the large-size video, which I recommend. It's 8:30 long.
There are three exceptional things about this video. First, it's great art: an enthralling performance, emotional themes, beautiful imagery. Second, the performance itself is technically amazing, yet the artist has apparently been doing this for only a year. Finally, all of this is achieved with some plain sand on a flat (backlit) surface. The tools for art don't get much simpler.
And yet… this is exactly the type of real-time, subtle, organic, sensual and fast art I always imagined computers could be capable of. Unlike many swooshy multitouch demos, this is not art for art's sake; the animation covers very human topics (one of every four people in the region died on WWII's Eastern Front). And she's using every last creative aspect of sand, from brushing, to finger and palm painting, to throwing sand and scraping with the edge of her palm.
Two Hands are Better Than One
So this is how great it can be with some sand. How about some silicon? Matt Gemmell wrote a great piece on iPad application design I enjoyed. On the topic of the iPad's large, multitouch area, he writes…
The important point is that there are other, more obvious ways to accomplish these things; the two-handed input features are conveniences and power-user features. They’re useful and time-saving and possibly discoverable, but they’re not the only way to accomplish those tasks. We’re only just beginning to come to terms with the possibilities of dual-handed input; essential functionality shouldn’t require it yet.
You can see in the video that Kseniya rarely uses two hands. My stopwatch recorded only 1:15 of two-handed use in the eight-and-a-half-minute performance; that is, she uses both hands simultaneously for roughly 15% of the time (75 seconds out of 510). When she does, it's to do something quickly, like clearing an area. She also seems to use two hands when she wants to draw symmetrically, like the hair at 3:43.
The matter is not that simple, though. Many times she switches hands because she wants to draw on the far left (she appears right-handed), because she wants a particular shape, or because she needs to approach from a particular side.
Sometimes she switches for speed and artistic effect, alternating left and right throws.
Just the Tip(s) of the Iceberg
I love this video because of the richness of the interaction. It's an encyclopaedia of gestures: single-finger painting, multi-finger dabbing, parallel lines with thumb and middle finger, French-curve arcs with a palm, broad erasing strokes with the whole hand, and intricate airbrush effects with sand released from above. I agree with Matt: we are at the beginning of this whole wonderful adventure. I'm going to keep Kseniya's performance in mind as something to strive for. This is a great interface.
UPDATE: I got it wrong. But I think the trend is right.
Tomorrow…
Apple is rumoured to be announcing a new Tablet device. You probably know this. Rumours of it being shiny and thin (which it probably will be). How it will be always connected to the internet, and show you books and newspapers and movies on demand (which it probably can). How it will have some magical new jaw-dropping interface (which it probably will have).
But what excites me most is a possible feature that no one seems to have thought of. It's not sexy, and it's something we use every day on our desktop machines. In fact, you probably can't remember computing without it. And yet, I feel it's the key to the future of computing, and without it, the Tablet will not be able to spawn the New Age of Computing. So what's this amazing technology? I'll tell you: mouseOver. You know, the feature whereby links on a page change when you mouse over them, buttons darken, and tooltips appear. The subtle interaction that lets you learn more about an interface without committing to anything as serious as a mouse click.
Of course, the Tablet is all about Multitouch -insert choirs of angels- so there's no mouse to be seen. Just a finger or three. So let's call it 'touchOver'. Imagine icons that darken, lighten and pop out as you hover your finger over them, like a tantalising box of fancy chocolates.
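To make that concrete, here's a tiny sketch of what mouseOver feedback boils down to in a web page today; a hover-capable touchscreen could drive exactly the same two listeners. (The '#save' button and 'highlighted' class are invented for illustration.)

```typescript
// Minimal sketch of mouseOver feedback: highlight a control and show a
// tooltip on hover, letting the user learn about it without clicking.
// The '#save' id and 'highlighted' class are hypothetical.
const button = document.querySelector<HTMLButtonElement>('#save')!;

button.addEventListener('mouseover', () => {
  button.classList.add('highlighted');         // darken/pop the control
  button.title = 'Save the current document';  // tooltip appears on hover
});

button.addEventListener('mouseout', () => {
  button.classList.remove('highlighted');      // restore the resting state
});
```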
So, why bother to include an interaction feature from the past?
First, let's look at the existing benefits of mouseOver in desktop and web applications:
Users feel more comfortable with unfamiliar interfaces, exploring without the commitment of clicking
The user has feedback which helps them "aim" their cursor
Both of these are valuable. But in the history of multitouch interfaces I see rare mention of support for the touchscreen equivalent of mouseOver. I don't know why. Maybe it has been technically difficult to cleanly detect fingertips as they hover over a touch surface. Maybe the interaction design was never solved. Maybe I've been looking in the wrong places. Maybe it wasn't deemed necessary.
[0095]Another potential technique for the determination between "hovering" and "touching" is to temporally model the "shadow" region (e.g., light impeded region of the display). In one embodiment, when the user is typically touching the display then the end of the shadow will typically remain stationary for a period of time, which may be used as a basis, at least in part, of "touching". In another embodiment, the shadow will typically enlarge as the pointing device approaches the display and shrinks as the pointing device recedes from the display, where the general time between enlarging and receding may be used as a basis, at least in part, of "touching"…
…where it seems that Apple now has the technology, the art and the desire to achieve touchOver. Their patent in essence describes an artificially drawn 'shadow' of each fingertip as it hovers over the interface. Here's a very quick mockup I made of how this may look, as applied to iPhone.
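Reading between the lines of that patent text, the hover-versus-touch decision might look something like the sketch below. To be clear, this is my own guess at the logic: the sample shape, thresholds and names are all invented, since the patent describes the idea only in prose.

```typescript
// Hypothetical sketch of the shadow-based hover/touch logic quoted above.
// All names and thresholds are assumptions, not Apple's implementation.

interface ShadowSample {
  tipX: number;      // position of the shadow's end (the fingertip)
  tipY: number;
  area: number;      // size of the light-impeded ("shadow") region
  timestamp: number; // milliseconds
}

const STATIONARY_RADIUS = 3; // px of tip drift still counted as "stationary"
const STATIONARY_MS = 150;   // how long the tip must stay still to count

function classify(samples: ShadowSample[]): 'touching' | 'hovering' {
  // Heuristic 1: the end of the shadow staying put for a period
  // suggests the finger is touching, per the first embodiment.
  const latest = samples[samples.length - 1];
  const recent = samples.filter(s => latest.timestamp - s.timestamp <= STATIONARY_MS);
  const stationary = recent.every(s =>
    Math.hypot(s.tipX - latest.tipX, s.tipY - latest.tipY) <= STATIONARY_RADIUS);
  if (recent.length > 1 && stationary) return 'touching';

  // Heuristic 2: the shadow enlarges as the finger approaches and shrinks
  // as it recedes; the turning point between the two suggests a touch.
  for (let i = 2; i < samples.length; i++) {
    const grew = samples[i - 1].area > samples[i - 2].area;
    const shrank = samples[i].area < samples[i - 1].area;
    if (grew && shrank) return 'touching';
  }
  return 'hovering';
}
```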
First, I think this will make the touchscreen user experience even better. Fewer mis-tapped buttons, because you have a greater sense of where the device 'thinks' your finger is. More accurate detection of taps, because the device knows about your finger position even before you tap.
Secondly, and more importantly, it serves as a stepping stone to a multitouch proxy device.
What do I mean by 'proxy device'? Take the mouse, for example. You can see a 'mirror' of the physical mouse on the screen at all times (the cursor) that lets you interact without looking at the physical device.
For a multitouch tablet to replace, or at least augment, the mechanical keyboard and mouse, there should be a way to let you keep your eyes on the screen at all times. I know of at least one device that works in this way: the Tactapad by Tactiva (never released commercially).
You can watch a movie of the Tactapad in action here. The Tactapad uses a video camera looking down on the user's hands to generate an artificial silhouette. A sufficiently advanced multitouch trackpad could generate an even more minimalist, cleaner version. Note: I'm not saying Apple would mimic the tool workflow as per the Tactapad, simply that they'd share the idea of proxy manipulation.
The end result is the same: a device that brings all the benefits of a dynamic multitouch interface to the desktop computing experience.
Caveats
"But touchscreens are so finicky!"
Lay your palm down on many touchscreens and it will incorrectly register as a touch event. Other Apple patents describe logic to rule this out. In addition, they boast the ability to switch between 'typing' mode (all fingers down), 'pointing' mode (one finger down) and 'drawing' mode (three fingers down, like holding an imaginary pencil). It may be a solvable problem.
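As a toy illustration of that mode-switching idea (the finger-count rules are paraphrased from the patents as described above; the names and the threshold for "all fingers down" are my own guesses):

```typescript
// Toy sketch of finger-count mode switching, per the description above:
// all fingers resting = typing, one finger = pointing, three = drawing.
// The InputMode names and the >= 8 threshold are assumptions.
type InputMode = 'idle' | 'pointing' | 'drawing' | 'typing';

function modeForContacts(fingersDown: number): InputMode {
  if (fingersDown >= 8) return 'typing';   // most fingers resting on the surface
  if (fingersDown === 3) return 'drawing'; // like holding an imaginary pencil
  if (fingersDown === 1) return 'pointing';
  return 'idle';
}
```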
"I'd get tired holding my fingers up all day"
Yes, you wouldn't want to hold your fingers 1cm above the desk all day long. I'm sure there is some solution. See above.
"But what about haptics/force feedback?"
Yes, haptics/force feedback may help you 'feel' your way around an interface without looking. I've been lucky enough to play with some lab-quality (read: $$$) haptic interfaces and agree that it's completely possible to emulate the feel of pressing a physical button or pushing around a lump of clay. But those devices were not cheap, not light, nor low-power. I'm looking forward to sophisticated haptics in our everyday devices as much as you are, but in some years' time.
"I'd never give up programming on my trusty IBM mechanical clunkity-clunk keyboard."
Maybe writers and programmers will stick to using mechanical keyboards forever. Maybe we'll always keep a mechanical keyboard handy. But it will get harder to resist the appeal of a device where everything is under your fingertips… imagine, for example, a Swype-like input interface that dynamically changes its dictionary depending on what application, or even what part of a line of code, you're currently typing in. A truly context-aware device, done in a subtle and sensible way.
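Something like the following sketch, say, where the active context picks the completion dictionary. The context names and word lists here are invented purely for illustration, not any real API:

```typescript
// Invented illustration: pick a completion dictionary based on context,
// so the same input surface adapts to email, code, or anything else.
const dictionaries: Record<string, string[]> = {
  email:   ['regards', 'meeting', 'attached', 'tomorrow'],
  code:    ['function', 'return', 'const', 'interface'],
  default: ['the', 'and', 'with', 'that'],
};

function completionsFor(context: string, prefix: string): string[] {
  const words = dictionaries[context] ?? dictionaries.default;
  return words.filter(w => w.startsWith(prefix));
}

// e.g. completionsFor('code', 're') returns ['return']
```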
"Why hasn't someone done it before?"
Hehe. They said that to the Wright brothers too. Actually, I'd love to mock this up using something like Keymote for iPhone, but it's very difficult without touchOver-like functionality.
And yes, Apple predictions are folly. But from my perspective it's simply a question of 'when' and 'by whom'. And the answers, I think, are 'soon' and 'Apple'.
Past. Present. Future.
Here's the bit where I'd love your help: have you seen any examples of touchscreen interfaces working with touchOver-like capability? How did they work? What other problems do you envision?
Is touchOver essential to a rich desktop multitouch experience? I love the fluidity of interfaces like this multi-touch puppetry (via Bill Buxton) and think touchOver will be essential to move rich interaction like this to mainstream computing. Let me know. :)
Business Insider compiles the 20 best Apple tablet mockups they've found from around the web. Only three have an optical drive, but that's three more than I expected.
Based on your feedback, here is an iteration on my previous design post to better show what applications can accept certain files dragged to them in the Mac OS X Dock.
The Graphical User Interface Gallery contains a tasty article on the development of the Apple Lisa OS. One amazing aspect to me is the iterations they went through before settling on a simple iconic desktop. The image above shows a non-hierarchical faceted search, presented in what looks like a modern multi-column view. Note, though, that the modern multi-column view we have in Mac OS X, for example, simply shows hierarchy, not a faceted search like the one above.
…I would assume that the reason why Apple went with an on-screen keyboard is not that they thought it afforded a better typing experience than a physical keyboard. They went with the on-screen keyboard because they thought the trade-offs were worth it.
I agree with Lukas here, but I think there's more. Yes, having an input area which can be keyboard, canvas, or aircraft controls is alone enough justification for not including a physical keyboard when text-entry is not key. If you do *have* to type, a real keyboard wins.
BUT! The current iPhone (etc.) keyboard simply copies how mechanical keyboards work. Tap, tap, tap. Darn, I missed the G. The strength of a touchscreen is not its tap detection; if anything, that's the most unreliable part of the interaction. Trying to type fast on the iPhone is like trying to play 'Flight of the Bumblebee' on the double bass: sure, you can do it, but it's not a good match. Instead, I'm eagerly anticipating development in alternative touchscreen text-entry approaches, combined with better touchscreens, haptic technologies and new sets of software idioms.
And I don't believe I'm the only one. I think Apple is predicting a near-term future where touchscreen text-entry methods actually outperform full-size mechanical keyboards. And that this future is near enough to require them to commence the evolution of their technology, and their users, in order to get there. I believe the aping of the physical QWERTY keyboard is a transitional step.