UI&us is about user interface design, user experience design, and the cognitive psychology behind design in general. It's written by Keith Lang, co-founder of Skitch (now part of Evernote). His views and opinions are his own and do not represent in any way the views or opinions of any company.


Entries in interaction (8)

Saturday, Jan 30, 2010

Daddy, What's a Mouse?

"Daddy, what's a mouse?"

"It's something that we used to point at objects on a computer screen"

"Just one thing at a time?"

"Yes honey."

"Wow! But how did you do this?"

[She resizes a square with two fingers and then touches the others to propagate the change]

"Well…in the past it was different. First you need to select all the objects you were interested in, by clicking in a space nearby, then dragging an imaginary rubber band around them all. If they weren't next to one another, then you needed to hold down Command on the keyboard while you clicked on each one. Then you would adjust the size of them with a separate control panel at the side of the screen. Or you might size one how you want, then press Command+C to copy, then Command+V to paste the squares…… are you listening?"

"No, sorry Daddy, that's all too technical for me. I don't know how you remembered all that in the old days!"

 

In the future, our children will all use rich multi-touch devices. They will look at the mouse & keyboard combination the same way we look at the Command Line Interface today.

Tuesday, Jan 26, 2010

Almost Touching the Tablet

UPDATE: I got it wrong. But I think the trend is right. 

Tomorrow…

Apple is rumoured to be announcing a new Tablet device. You probably know this. Rumours say it will be shiny and thin (which it probably will be), always connected to the internet, showing you books and newspapers and movies on demand (which it probably can), and sporting some magical new jaw-dropping interface (which it probably will).

But what excites me most is a possible feature that no one seems to have thought of. It's not sexy, and it's something we use every day on our desktop machines. In fact, you probably can't remember computing without it. And yet I feel it's the key to the future of computing; without it, the Tablet will not be able to spawn the New Age of Computing. So what's this amazing technology? I'll tell you: mouseOver. You know, the feature whereby links on a page change when you mouse over them, buttons darken and tooltips appear. The subtle interaction that lets you learn more about an interface without committing to anything as serious as a mouse click.

Of course, the Tablet is all about Multitouch -insert choirs of angels- so there's no mouse to be seen. Just a finger or three. So let's call it 'touchOver'. Imagine icons that darken, lighten and pop out as you hover your finger over them like a tantalising box of fancy chocolates.
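
Here's a rough sketch of how that hover feedback could be wired up on the web, assuming a hover-capable touchscreen that reports a finger through standard Pointer Events before contact. The element ID and class name are hypothetical.

```typescript
// Sketch only: assumes a hover-capable touchscreen that fires standard
// Pointer Events for a finger *before* it makes contact. The element ID
// and CSS class are hypothetical.
const icon = document.getElementById('app-icon')!;

icon.addEventListener('pointerover', () => {
  // A hovering finger highlights the icon, just like mouseOver.
  icon.classList.add('touch-over');
});

icon.addEventListener('pointerout', () => {
  icon.classList.remove('touch-over');
});

icon.addEventListener('pointerdown', () => {
  // Only actual contact commits to anything.
  icon.classList.remove('touch-over');
});
```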

So, why bother to include an interaction feature from the past?

First, let's look at the existing benefits of mouseOver in desktop and web applications:

  1. Users feel more comfortable with unfamiliar interfaces, exploring without the commitment of clicking
  2. Users get feedback that helps them "aim" the cursor

Both of these are valuable. But in the history of multi-touch interfaces I've seen rare mention of support for the touchscreen equivalent of mouseOver. I don't know why—maybe it has been technically difficult to cleanly detect fingertip positions as they hover over a touch surface. Maybe the interaction design was never solved. Maybe I've been looking in the wrong places. Maybe it wasn't deemed necessary.

But. Fast forward to now — see a patent recently awarded to Apple…

[0095] Another potential technique for the determination between "hovering" and "touching" is to temporally model the "shadow" region (e.g., light impeded region of the display). In one embodiment, when the user is typically touching the display then the end of the shadow will typically remain stationary for a period of time, which may be used as a basis, at least in part, of "touching". In another embodiment, the shadow will typically enlarge as the pointing device approaches the display and shrinks as the pointing device recedes from the display, where the general time between enlarging and receding may be used as a basis, at least in part, of "touching"…

…where it seems that Apple now has the technology, the art and the desire to achieve touchOver. Their patent in essence describes an artificially drawn 'shadow' of each fingertip as it hovers over the interface. Here's a very quick mockup I made of how this might look, applied to the iPhone.

touchOver mockup from Keith Lang on Vimeo
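
To make the patent's heuristic concrete, here's a minimal sketch of how the hover-versus-touch decision might be coded. The frame format, window size and threshold are entirely my own assumptions, not Apple's.

```typescript
// Toy classifier based on the patent's observation: while touching, the
// fingertip 'shadow' stays roughly stationary; while hovering, it grows
// and shrinks as the finger approaches and recedes. All numbers invented.
type Frame = { timestampMs: number; shadowArea: number };

function classify(frames: Frame[]): 'hovering' | 'touching' {
  const recent = frames.slice(-10); // last ~10 sensor frames
  if (recent.length < 2) return 'hovering';

  const areas = recent.map(f => f.shadowArea);
  const spread = Math.max(...areas) - Math.min(...areas);
  const STABLE_THRESHOLD = 20; // px², hypothetical

  // A stable shadow over the window suggests the finger is planted.
  return spread < STABLE_THRESHOLD ? 'touching' : 'hovering';
}
```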

So why does touchOver matter so much?

First, I think this will make the touchscreen user experience even better. Fewer mis-tapped buttons, because you have a greater sense of where the device 'thinks' your finger is. More accurate detection of taps, because the device knows about your finger position even before you tap.

Secondly, and more importantly, it serves as a stepping stone to a multitouch proxy device. 

What do I mean by 'proxy device'? Take the mouse, for example. You can see a physical 'mirror' of the mouse on the screen at all times — the cursor — which lets you interact without looking at the physical device.

For a multitouch tablet to replace, or at least augment, the mechanical keyboard and mouse, there should be a way to let you keep your eyes on the screen at all times. I know of at least one device that works this way: the Tactapad by Tactiva (never released commercially).

You can watch a movie of the Tactapad in action here. The Tactapad uses a video camera looking down on the user's hands to generate an artificial silhouette. A sufficiently advanced multitouch trackpad could generate an even more minimalist, cleaner version. Note: I'm not saying Apple would mimic the Tactapad's tool workflow, simply that they'd share the idea of proxy manipulation.

The end result is the same: a device that brings all the benefits of a dynamic multitouch interface to the desktop computing experience.

Caveats

"But touchscreens are so finicky!"

Lay your palm down on many touchscreens and it will be incorrectly registered as a touch event. Other Apple patents describe logic to rule this out. In addition, they boast the ability to switch between a 'typing' mode (all fingers down), a 'pointing' mode (one finger down) and a 'drawing' mode (three fingers down, like holding an imaginary pencil). It may be a solvable problem.
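
As a sketch of how that mode logic might look, assuming the surface simply reports how many fingertips are resting on it: the finger counts come from the description above, everything else is invented.

```typescript
// Hypothetical mode switch driven purely by the number of resting fingertips.
type Mode = 'typing' | 'pointing' | 'drawing' | 'idle';

function modeForTouchCount(fingersDown: number): Mode {
  if (fingersDown >= 8) return 'typing';   // all fingers down
  if (fingersDown === 3) return 'drawing'; // like holding an imaginary pencil
  if (fingersDown === 1) return 'pointing';
  return 'idle';
}
```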

"I'd get tired holding my fingers up all day"

Yes, you wouldn't want to hold your fingers 1cm above the desk all day long. I'm sure there is some solution. See above.

"But what about haptics/force feedback?"

Yes, haptics/force feedback may help you 'feel' your way around an interface without looking. I've been lucky enough to play with some lab-quality (read: $$$) haptic interfaces and agree that it's completely possible to emulate the feel of pressing a physical button or pushing around a lump of clay. But those devices were neither cheap, light nor low-power. I'm looking forward to sophisticated haptics in our everyday devices as much as you, but in some years' time.

"I'd never give up programming on my trusty IBM mechanical clunkity-clunk keyboard."

Maybe writers and programmers will stick to using mechanical keyboards forever. Maybe we'll always keep a mechanical keyboard handy. But it will get harder to resist the appeal of a device where everything is under your fingertips… imagine, for example, a Swype-like input interface that dynamically changes its dictionary depending on what application, or even what part of a line of code, you're currently typing in. A truly context-aware device, done in a subtle and sensible way.
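
Purely as a sketch of that context-aware dictionary idea, with invented contexts and word lists:

```typescript
// Swap the suggestion dictionary based on where the user is typing.
// The contexts and word lists are stand-ins for illustration only.
type Context = 'email' | 'code-keyword' | 'code-identifier';

const dictionaries: Record<Context, string[]> = {
  'email': ['regards', 'meeting', 'attached'],
  'code-keyword': ['function', 'return', 'const'],
  'code-identifier': ['userName', 'handleTap', 'wordList'],
};

function dictionaryFor(context: Context): string[] {
  return dictionaries[context];
}
```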

"Why hasn't someone done it before?"

Hehe. They said that to the Wright brothers too. Actually, I'd love to mock this up using something like Keymote for iPhone, but it's very difficult without touchOver-like functionality.

And yes, Apple predictions are folly. But to me it's simply a question of 'when' and 'by whom'. And from my perspective, the answers are 'soon' and 'Apple'.

Past. Present. Future.

Here's the bit where I'd love your help: have you seen any examples of touchscreen interfaces working with touchOver-like capability? How did they work? What other problems do you envision?

Is touchOver essential to a rich desktop multitouch experience? I love the fluidity of interfaces like this multi-touch puppetry (via Bill Buxton) and think touchOver will be essential to move rich interaction like this to mainstream computing. Let me know. :)

 

 

Monday, Dec 7, 2009

Radial Menus, Release to Select

Engadget recently covered the release of a new mobile device by Enblaze. I like the radial interface, and the move away from buttons, which in my opinion are one of the weakest interactions on a touch screen. Some downsides of this approach include:

  • Accidental triggering of features when you lose your grip momentarily
  • Bias or inefficiency in switching between left and right hands (unless it switches automatically)
  • Loss of spatial memory
  • Loss of the visual efficiency of vertical and horizontal grid alignment

Sunday, Jun 7, 2009

Physical keyboards are sooo 2009

Lukas Mathis speculates in Virtual Keyboards, Real Keyboards on the reason for the iPhone's virtual keyboard:

…I would assume that the reason why Apple went with an on-screen keyboard is not that they thought it afforded a better typing experience than a physical keyboard. They went with the on-screen keyboard because they thought the trade-offs were worth it.

I agree with Lukas here, but I think there's more to it. Yes, having an input area that can be a keyboard, a canvas, or aircraft controls is alone enough justification for not including a physical keyboard when text entry is not key. If you do *have* to type, a real keyboard wins.

BUT! The current iPhone (etc.) keyboard simply copies how mechanical keyboards work. Tap, tap, tap. Darn, I missed the G. The strength of a touchscreen is not its tap detection — if anything, that's the most unreliable part of the interaction. Trying to type fast on the iPhone is like trying to play 'Flight of the Bumblebee' on the double bass. Sure, you can do it, but it's not a good match. Instead, I'm eagerly anticipating development of alternate touchscreen text-entry approaches, combined with better touchscreens, haptic technologies and new sets of software idioms.

And I don't believe I'm the only one. I think Apple is predicting a near-term future where touchscreen text entry methods actually outperform full-size mechanical keyboards. And that this future is near enough to require them to commence the evolution of their technology, and their users, in order to get there. I believe the aping of the QWERTY physical keyboard is a transitional step.




Tuesday, Jun 2, 2009

Microsoft Announces 'Natal' 3D System

Microsoft has announced at this year's E3 games conference a new peripheral/system for the Xbox, coming next year, called 'Natal'. They've got some slick prototypes/studio mockups which show people interacting with games and other applications in a very convincing manner. The system is based on 3D camera technology which I've previously discussed, and it's good to see it coming to the fore. Microsoft certainly thinks it's a big deal, pulling out Spielberg and Peter Molyneux to talk up the future.

The promotional material implies that they've got some extra processing turning 3D camera bitmap images into models of the human body to be passed to the game itself. Perhaps this processing is the source of the lag between the person and the on-screen action in the video on this page, which I'm guessing shows a real prototype. Too much lag and you end up with a cognitively tiring game. The system also boasts speech recognition — I'm skeptical of how effective that will be when you're yelling at the screen. Overall, this new 3D system promises awesome new interaction possibilities, but given the huge hype, expect some post-natal depression if it doesn't meet expectations.
UPDATE: Natal now has a website.



Thursday, May 21, 2009

Innovation in Linux Moblin by Intel

Intel has announced a new OS, Moblin, specifically designed for netbooks and optimized for the Atom processor. The Moblin OS is based on Linux, but unlike Linux UI designs I've seen in the past, Moblin takes a fresh approach to managing applications, workspaces and media. It's designed for the small screens of netbooks with a pleasing aesthetic, based on what I've seen in the Ars Technica review and on moblin.org.



Tuesday, May 12, 2009

Alternative 'TouchType' Text Input Approach

TouchType proposes a new approach to text input on a touchscreen. It takes predictive text to another level — suggesting a list of words you are probably going to type next. With each letter you input, the list is updated. The application is not yet available, so I've not had an opportunity to play with it, but it does look reasonably good.
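
For illustration, the per-letter narrowing seems to amount to something like this. The word list and ranking are my own stand-ins, not TouchType's actual model.

```typescript
// Filter a candidate list as each letter arrives (toy example).
const candidates = ['touch', 'touchscreen', 'tablet', 'type', 'typing'];

function suggestions(prefix: string, limit = 3): string[] {
  return candidates
    .filter(word => word.startsWith(prefix.toLowerCase()))
    .slice(0, limit);
}

console.log(suggestions('to')); // ['touch', 'touchscreen']
console.log(suggestions('ty')); // ['type', 'typing']
```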

However, it seems to me that treating touchscreens like real-world keyboards is flawed from the start. This is because the strength of touchscreens is hi-resolution, real-time positioning data for any finger dragging along the glass. The weakness of touchscreens is the initial point of touch — the system basically has to (a) second-guess where you meant to touch and (b) second-guess whether you meant to tap at all. I would propose an interaction where you only lift your fingers for unusual events, and design the system to be almost totally controlled by dragging on an XY plane. Swype gets closer to this idea of continuous input — we're well overdue for a change, as I've written about before.
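
And a loose sketch of that drag-first interaction, using standard Pointer Events. The element ID is hypothetical and the path-to-word matching is left as a stub.

```typescript
// Collect the finger's path while it stays on the glass; treat lift-off
// as the rare 'commit' event rather than relying on the initial tap.
const surface = document.getElementById('input-surface')!;
const path: { x: number; y: number }[] = [];

surface.addEventListener('pointerdown', (e: PointerEvent) => {
  path.length = 0;
  path.push({ x: e.clientX, y: e.clientY });
});

surface.addEventListener('pointermove', (e: PointerEvent) => {
  if (e.buttons) path.push({ x: e.clientX, y: e.clientY }); // finger still down
});

surface.addEventListener('pointerup', () => {
  // Lifting the finger is the unusual event that commits the gesture.
  console.log(`gesture captured with ${path.length} sample points`);
});
```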

via touchusability.com



Sunday, Apr 19, 2009

Mind-reading Interfaces

The Neural Impulse Actuator, by OCZ Technology, lets you interact using the contraction of various facial muscles. From Wikipedia:

The name Neural Impulse Actuator implies that the signals originate from some neuronal activity, however, what is actually captured is a mixture of muscle, skin and nerve activity including sympathetic and parasympathetic components that have to be summarized as biopotentials rather than pure neural signals.


So I'm a bit confused myself on whether to call this a 'mind-reading' interface or a 'muscle-reading' interface. Perhaps it's something in between. What is clear is that this genre of interface appears commercially viable for the mass market; sold first as a hardcore gaming interface, later to infiltrate mainstream OS usage. This is a whole new world of Interaction Design.

A nice intro to what I'd call true 'mind-reading' technology is in the following 60 Minutes story:

