Email Interview with William Van Hecke, Omni Group
Thursday, September 11, 2008 at 2:51AM
Anyone who uses a Mac has probably used one of The Omni Group's applications. William Van Hecke is the User Experience Lead at this well-respected and long-lasting independent software house, and I've had the wonderful opportunity to interview him by email on the processes of design at Omni.
Hi William, thanks for taking the time to voice your thoughts on UI and us.
Congratulations on the recent Apple Design Award for OmniFocus. With this one adding to four previous ADAs, what is it about the Omni Group that generates such award-winning software?
It’s a concoction of pretty much every variable about Omni, from the family atmosphere around the office, to the long years of engineering and design experience, to the way employees from every department feel ownership of the products. We just can’t get enough of making stuff, and making stuff better. When we look for new people, we demand that in addition to their expertise, they also bring a personality we actually want to eat lunch and dinner with every day.
William enjoys a nice can of "Smap!"
How do you approach designing User Interactions and interface in general? Could you describe your workflow?
Most products start with a list of features, ranging in concreteness from “we need a badge with the due item count” to “you shouldn’t need to know GTD in order to use this app”. If we’re working on a new version of an existing app, many of the features are born of our enormous bug-tracking database, where we file every bit of customer feedback that comes in. But we have to agree with a request before we incorporate it, no matter how popular it is. Our own internal zeitgeist is the main determiner of where the product goes, because nobody can do their best work on something they’re not really interested in.
A few epic planning meetings, attended by anyone who cares about the app, get the list into manageable shape; then we dive into mockups. I have gotten pretty obsessive about producing nearly pixel-perfect mockups. I take standard controls from our Mac OS X and iPhone interface stencils in OmniGraffle, and create any custom chrome in Photoshop. The result is a pretty huge document with lots of canvases and lots of layers, which we check into Subversion for revision as the app grows.
Once things start showing up in the built app, there is a ton of email back and forth and office drop-ins to figure out the wording of dialogs, the positioning of buttons, and so on. Much of the satisfaction of designing apps is in these little iterative design sessions.
To take a recent example, some of us found that in OmniFocus for iPhone, the screen for moving an action is rather confusing. I sent around a screenshot of how it is, side by side with a mockup for how it could be better, and a list of arguments for why it should change. A flurry of emails went around the UI team, with what people liked and disliked about the proposal. This continued until everyone agreed (or admitted that they didn’t feel strongly enough to keep fighting). Then I filed a bug in our database, attached the final mockup, and gave it to the engineering team. In a matter of a couple of hours of work, we’ve made our product tangibly better. I just can’t get enough of that.
Of course, lots of stuff is decided on the fly and just stays that way, because we have engineers who know and care about usability. Often, the UI team doesn’t touch the wording or layout of an interface at all. The need for some dialog becomes apparent during coding, the engineer carefully writes it and lays it out, and it goes right in.
Many first-timers will reach for the Mac UI Guidelines to guide them in designing their applications. What are your thoughts on Apple’s UI Guidelines, and where could they improve, if at all?
I have actually been told by Apple UI designers that the HIG is for developers who don’t really know what they’re doing: if you’re a Windows developer porting your app to the Mac, or a hardware manufacturer building a tool to talk to your device, yes, please follow the HIG religiously!
But for dedicated Mac developers, the HIG is just a very nice set of guidelines for when we’re stuck on a detail. Most of the more consequential design decisions require us to use common sense, or come up with something creative, or take inspiration from elsewhere (like web-based interfaces, or physical objects).
The Apple Style Guide is lesser-known, but I find myself turning to it even more often than the HIG. So much of user experience design is in clear, consistent writing; we spend a lot of time and energy on it. The Apple Style Guide has plenty of good advice on how to write both in-app text and documentation.
What features or UI approaches of your applications do you wish other developers would borrow from?
The most under-recognized aspect of Omni software is how extensible it is. We don’t do enough to show off the scripting, exporting, custom-data, and other such customizability that our apps have. I regularly use Python (with appscript) to do things like getting OmniGraffle to draw graphs based on my last.fm data, or exporting my OmniOutliner book list to Bookpedia, or telling OmniWeb to do a Google “I’m Feeling Lucky” search when I type a phrase into its address field.
Scriptability and other extensibility hooks make an app orders of magnitude more useful. Simultaneously they reduce the pressure on us to include every odd feature that gets requested: if we think it’s not important to the majority of our customers, we can instead offer a script for people to add to their toolbars, or teach them how to create their own custom data fields.
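To give a concrete flavor of the kind of scripting William describes, here is a minimal sketch of driving OmniGraffle from Python. Rather than assume the appscript library is installed, it just generates the AppleScript source that would draw one bar per data point; the `make new shape` properties are illustrative assumptions, not OmniGraffle's verified scripting dictionary, so check the app's own dictionary before relying on them:

```python
def bar_graph_script(data, bar_width=40, gap=10, scale=2):
    """Build AppleScript that draws one rectangle per (label, value)
    pair on OmniGraffle's front canvas.

    The shape properties below are assumptions about OmniGraffle's
    scripting dictionary, for illustration only.
    """
    lines = ['tell application "OmniGraffle"']
    for i, (label, value) in enumerate(data):
        x = i * (bar_width + gap)   # horizontal position of this bar
        height = value * scale      # bar height from the data value
        lines.append(
            '    make new shape at end of graphics of front canvas '
            f'with properties {{origin: {{{x}, 0}}, '
            f'size: {{{bar_width}, {height}}}, name: "{label}"}}'
        )
    lines.append('end tell')
    return '\n'.join(lines)

# On a Mac, the generated source could be handed to `osascript -e`,
# or the same calls made directly with appscript.
script = bar_graph_script([("mon", 12), ("tue", 30), ("wed", 7)])
```

The same pattern — a small script that reshapes your own data into something an Omni app can consume — is what makes the extensibility hooks so useful in practice.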
As applications grow in complexity, it seems we end up with more and more floating palettes. You have taken additional efforts to make managing floating palettes easier. What are your thoughts on the use of floating palettes and the single-window vs multi-window approach?
I do love floating inspectors, mainly because of the way I can summon them, do my thing, and then dismiss them, without affecting the geometry of my content area. If the app offers enough ways to manipulate stuff without opening the inspectors at all, that’s even better. In OmniGraffle 5, we introduced a mini-inspector bar, which lives in the top ruler and offers the most basic controls for editing objects; this keeps the inspector windows as “a sometimes food”.
But in apps like, say, CSSEdit, I think it makes perfect sense for the inspectors to be a part of the main content window. In that wonderful app, the attributes are the content, and the layout is not nearly as important.
The NeXTSTEP OS
Mac OS X has benefitted from the clean slate of NeXTSTEP. In many ways, the iPhone is this clean slate all over. What is it like to develop UI on the iPhone, and has it inspired changes to your desktop applications?
I am so excited about the iPhone platform. To me it feels a lot like when the Mac platform was first being explored: the 512×384-pixel 1-bit screen, the crazy new input device called a “mouse”, and the fun drag-and-drop atmosphere of the OS itself all made for a thriving and thrilling environment of software.
Those guys had to figure out how to get away from the unbounded complexity of command lines, with any number of possible commands and parameters; instead they had to fit their functionality on the screen in a way that made sense when you looked at it. Now we have to make software that offers just a (literal) handful of options at a time, and makes sense when you touch it. Of course, people make brilliant things when working within constraints, and what we’re seeing on iPhone is a fine exemplification of that.
As a side note, your company initially developed for the NeXT platform. Is there anything you miss from the original NeXTSTEP OS?
(I’ll have to leave this to Andrew and Ken; I’ve been a Mac fanatic since age 5!) [Ed note: hopefully soon]
iTunes handles music, iPhoto handles photos. Do you see your apps taking on more management of their own files in the future?
OmniFocus works in the library fashion of those apps (though it can also open other OmniFocus databases as documents if you want it to). For apps that are about creating documents, though, I still think there’s a whole lot of value in letting the user freely organize files on the disk without any regard to which application they came from.
What are your thoughts on OpenCL? Will all future applications boast realtime "smoke"? 3D?
Anything that makes it easier to offer smooth transitions between states is a boon to usability. I am dreaming of a system where nothing on the screen instantly appears, disappears, or changes; everything should have some sort of transition, even if it’s usually just a minimal ⅛-second slide. But, that’s mostly in the realm of LayerKit. Er, I mean, CoreAnimation!
From what I understand so far, OpenCL seems to promise more exciting activity behind the scenes, allowing the guys in the engineering wing to say “yeah, we can do that” to more and more ambitious stuff. That’s exciting when it means we can make the software do combinatorially complex stuff rather than asking the user to do it: laying out really complicated OmniGraffle diagrams, for instance.
As for truly superfluous visual effects, I wouldn’t mind having somewhat less of that. Software should be fun, but that should be because it is reliable and the interactions are solid and satisfying. Decals don’t make your car run better.
In many ways the basic metaphors of window management, copy and paste, and the like haven’t changed in 20 years. Are we due for a change?
I’m no visionary, but it seems to me that starting over completely on the desktop won’t be worth it until we have some really mind-blowing hardware advancements.
My fantasy interface is a heavy wooden desk that contains a bunch of finely crafted pens, high-quality paper, rulers, and other such tools. Some kind of technology on the level of magic (probably nanotech) monitors how you manipulate those objects and maps your interactions to the benefits of digital storage, versioning, network connectivity, and so on.
When you write words on a page, they are also stored on a disk somewhere. When you drop a letter in your outbox, the data it contains is instantly transmitted to the addressee, while the physical paper it occupies is wiped clean or molecularly disassembled to be rebuilt as some other document later. So, yeah, that’s probably a long way off, and even with all of that, one of the tools in the desk would probably be a Das Keyboard or a Model M, configured for the Dvorak layout; keyboards are a really good text input method. :)
Alan Kay famously said “People who are really serious about software should make their own hardware.” How does desktop hardware (ever-present mouse, keyboard) impact the UI work you do?
Really direct manipulation sure would be nice. Shoving a hunk of plastic around on my desk, or hovering a pen over a tablet, is close. But every now and then I just want to grab the thing that’s on screen and make it do the thing I’m trying to do, without abstracting my intentions through the device to a simple set of (x, y) coordinates and a boolean for whether the button is down. Multi-touch might be somewhat better, but while you’re gaining the directness of touching the representation, you lose the contact with a three-dimensional object. That’s a trade-off that works on iPhone and probably won’t work on a desktop system, where you need the precision of some kind of pointing device.
I envy physical product designers, who don’t have to worry about the abstraction—instead they’re figuring out the resistance of buttons and the weight and ergonomics of objects, not how to subtly shift around billions of bits of information just the way a user wants. On the other hand, I recently talked to a product designer who’s seduced by the apparent excitement of software user experience design!
UI and us is focussed on the commonality of people in relation to design. What have you learnt about people that could be applied across all modes of interaction? For example in gesture, information bandwidth, and learning processes?
People are generally smart, and they’re much more capable of learning interfaces than we give them credit for. There is something to be said for interfaces that are immediately and completely understandable, but I think that trips us up sometimes, so that we design for the first run rather than for the 1,000 runs that follow.
You can’t drive a car or play a guitar very well the first time you try, but people are willing to learn the art of using those tools to their fullest potential. Of course, the basics are easy to grasp, and you can get the car to move forward or the guitar to make some noise within your first few seconds of trying. With software there’s this myth that because it’s possible to do a lot with it right away, you should be able to do everything with it right away, without thinking or learning anything at all. I would like to create tools that require no learning to use them in the most basic way, but which reward patience and attention by empowering you to do something really spectacular.
In that way I guess we at Omni apply the 90 percent solution instead of Apple’s suggested 80 percent solution: We want to make something that satisfies almost everyone, and then goes on to satisfy half of the remaining geeks and patient learners and enthusiastic, creative folks who want to use our software for something we didn’t expect.
More broadly, who are some little-known current or past researchers or innovators that have inspired you, that others may not be aware of?
Of course, I have tremendous respect for well-known experts like (to name the first few people who come to mind) Donald Norman, Edward Tufte, and the folks at 37signals.
But I’ll latch on to your “little-known” descriptor here, so bear with me: I’m deeply inspired by Japanese video games, especially from cultish companies like GUST, Nippon Ichi, Flight-Plan, Red Company, and Atlus. Their games marry the extremely sophisticated Japanese aesthetic sense I admired while living in Tokyo, the necessarily simplified interactions of console games, the benefits of listening very closely to your audience, and the passion of intense creativity. There’s a nearly inexhaustible well of blindingly good design and inspiration to study there.
The other area that comes to mind is that of popular science writing. Authors like Richard Dawkins, Matt Ridley, Brian Greene, and Steven Pinker somehow take the mind-bending yet crucial ideas of science and make them understandable to bozos like me. Sure, user experience design involves a good deal of writing, and the more good writing we read, the better we write. But beyond that, the way those writers organize and present their mountains of information is a fine model for how we present our own work: they display confidence that the material is worth offering, and trust in the “end user” to find the bit that’s meaningful to them and to extract a reward from it.
To close, is there anything you’d like to share about computer UI design or say to the UI design community in general?
Never stop making cool stuff! Never stop being super nice people.
Many thanks go to William for taking the time to answer my questions in such a comprehensive and entertaining way. I look forward to much more exciting cool stuff from The Omni Group. :-)