UI&us is about User Interface design, User Experience design and the cognitive psychology behind design in general. It's written by Keith Lang, co-founder of Skitch (now part of Evernote). His views and opinions are his own and do not represent in any way the views or opinions of any company.


Entries in history (6)

Thursday
Feb 18, 2010

Can't Read URLs Some People

This post is a reply to "Some People Can't Read URLs" by Jono of Mozilla. 

The backstory is as follows: ReadWriteWeb had published a piece about Facebook. Through the magic of PageRank, this page became the top Google listing when you searched for "facebook login". Here's the unexpected bit: comments started pouring in from people who thought the ReadWriteWeb page WAS Facebook, declaring this new Facebook 'redesign' to be terrible. Some viewers of this saga took it to be a display of the utter stupidity of the majority of Facebook users. Others interpreted it as a demonstration that the technology was poorly designed. To my understanding, Jono's piece breaks the problem down into a dichotomy between a simple web for simple users and a web with educated users. The solution he proposes, educating users about URLs, would lead to the latter. I don't quite agree.

I like education. It's a wonderful, empowering, liberating thing. I also like stories. Let's start with one of them.

Twinsong

The year was 700 AD. The Angles and the Saxons had been living in Britain for a while, having migrated over from Germany. Apparently everyone forgets about the Jutes. They were there too. But we'll forget about them. Anyway, this bunch spoke their own language: Anglo-Saxon. Though it laid the foundations for modern English, some fundamentals differed:

[From a truly excellent course guide on the history of Anglo-Saxon by Professor Michael D.C. Drout. I highly recommend listening to the Audible version for its out-loud Anglo-Saxon]

In Anglo-Saxon, you could say “dog cat ate,” “ate cat dog,” or “dog ate cat” and still not tell your reader who did the eating and who got eaten. Instead of relying on word order, you would put a tag on who got eaten, so “dog ate cat-ne” and “cat-ne dog ate” and “cat-ne ate dog” would all mean the same thing and would be different from “dog-ne ate cat” or “cat dog-ne ate” (whoever gets the –ne is the one who gets eaten).

Got it? OK. A hundred years later, the Vikings invaded, and stayed. Things got confusing for a while with the Danish and Norwegian native languages intermingling with Anglo-Saxon. Later the French rocked up. Bah. What a mess. Jump in your DeLorean, punch in 2010, and —sssSPOW!— we find that English now uses word order, not tags, to define who ate whom. Dog ate cat. Not cat-ne ate dog. Point being that unless you know the rules, it can be very confusing.

Let's continue the story here in the modern era. I remember tech journalist Leo Laporte* telling the following story…

Leo wanted to share with his wife his newly created streaming tech-news site. I'm sure he must have been bursting with pride at his accomplishment of duplicating the functionality of a multi-million dollar broadcast station in a single room in his cottage. He instructed his wife to visit live.twit.tv to enjoy the fruits of his labour. But she accidentally typed into her browser something not quite right — twit.live.tv. After seeing the result she demanded an explanation. I can tell you the site she found is Not Safe For Work and doesn't contain in-depth comparisons of the iPad vs the Kindle book reader.

Same problem, different millennium. 

The URL bar seems to me to be the last bastion of the CLI that the average Joe is forced to deal with. All these slashes and dots. Get one letter or dash wrong, or reorder a few words, and it all goes to hell. So, users give up and just type into Google. 
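
To make that concrete, here's a minimal sketch, using the standard URL class available in modern browsers and Node.js, of how the hostname from Leo's story changes meaning when its labels are reordered. The "last two labels" rule below is my own simplification; real browsers consult the public-suffix list.

```typescript
// Hostnames read right-to-left: the registrable domain is (roughly)
// the last two labels, so reordering labels lands you on a different site.
const intended = new URL("http://live.twit.tv/");
const mistyped = new URL("http://twit.live.tv/");

const registrableDomain = (host: string): string =>
  host.split(".").slice(-2).join(".");

console.log(registrableDomain(intended.hostname)); // "twit.tv"  (Leo's site)
console.log(registrableDomain(mistyped.hostname)); // "live.tv"  (someone else entirely)
```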

So what the heck does the URL actually do, from a UX perspective?

  1. Lets you go to a particular place on the web by typing something in
  2. Tells you where you are, for example if you've been clicking links and find yourself on a new page or site
  3. Provides secure 'where am I' information. If you're browsing in the Western World, facebook.com usually means you're at facebook.com

Now I don't know much about networking. I haven't been following browser/HTML5/DNS debates. BUT what I do know is that URLs are a weak point in the user experience. In the words of MC Hammer, breakitdown… 

  1. Lets you go to a particular place on the web by typing something in

Bit of a fail here, as we've already discovered above. One of the problems is that the semantics are not understood by average punters.

How about we have the browser help?

 

 The important stuff is big and bold. Much more could be done. Props go to the Mozilla team for the Awesome bar, which is forging ahead in usability. 

 

  2. Tells you where you are, for example if you've been clicking links and find yourself on a new page or site

Fail, because people aren't watching/noticing. I really like the little icons that appear next to the URLs — favicons. I'd love a bigger version. How about we colour the chrome of the browser with it? Facebook uses 'you're leaving Facebook.com' dialogue boxes to let people know they're going out into the Wild West. Effective, but clunky.
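
As a purely speculative sketch of that "colour the chrome" idea, a browser (or extension) could average the favicon's opaque pixels and tint the toolbar with the result. The element, the favicon URL and the function name here are hypothetical, not anything a shipping browser does.

```typescript
// Speculative sketch: tint a toolbar element with the favicon's average colour.
function tintChromeFromFavicon(faviconUrl: string, toolbar: HTMLElement): void {
  const img = new Image();
  img.crossOrigin = "anonymous"; // needed so getImageData isn't blocked
  img.onload = () => {
    const canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext("2d");
    if (!ctx) return;
    ctx.drawImage(img, 0, 0);
    const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
    let r = 0, g = 0, b = 0, n = 0;
    for (let i = 0; i < data.length; i += 4) {
      if (data[i + 3] < 128) continue; // ignore mostly-transparent pixels
      r += data[i]; g += data[i + 1]; b += data[i + 2]; n++;
    }
    if (n === 0) return;
    toolbar.style.backgroundColor =
      `rgb(${Math.round(r / n)}, ${Math.round(g / n)}, ${Math.round(b / n)})`;
  };
  img.src = faviconUrl;
}

// Hypothetical usage:
// tintChromeFromFavicon("https://facebook.com/favicon.ico", document.querySelector("#toolbar")!);
```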

 

  3. Provides secure 'where am I' information

Smells like the Pareto principle to me: 

  • Many people, not just a few (hence the newsworthiness of the story), are accidentally visiting the wrong site
  • There are only a few popular browsers that most people use: Firefox, Chrome, Safari, IE, Opera
  • Of all the web's sites, there are a few that most non-savvy surfers use: Facebook, Myspace, Hotmail, Gmail, Yahoo, eBay, Amazon

It follows that the big browsers could keep a whitelist of sites they know non-savvy users will be visiting, and show some warning for sites that look like they're pretending to be one of them.
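
A rough sketch of how such a check might look follows. The site list, the edit-distance threshold and the function names are all illustrative assumptions on my part, not how any real browser implements anti-phishing.

```typescript
// Sketch: warn when a hostname is suspiciously close to, but not exactly,
// a well-known site. List and threshold are illustrative only.
const wellKnown = ["facebook.com", "gmail.com", "yahoo.com", "ebay.com", "amazon.com"];

// Plain Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

function checkHost(hostname: string): string {
  for (const site of wellKnown) {
    if (hostname === site || hostname.endsWith("." + site)) return "trusted";
    if (editDistance(hostname, site) <= 2) return `warning: looks like ${site}`;
  }
  return "no opinion";
}

console.log(checkHost("facebook.com"));  // "trusted"
console.log(checkHost("faceb00k.com"));  // "warning: looks like facebook.com"
```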

As for verified sites: Firefox is doing the best job of the browsers I use at showing that a site is verified. FF displays a large green button for 'verified'. But that only means something if you're expecting it to appear. Classic UX design problem: if you don't know (or forgot) that it's supposed to be there, you won't notice when it's not.

How about a standard symbol, displayed by the browser, that a page could reference: "you should be seeing this symbol"? Or some better specification for integration between the browser and the secure site itself. 

You May Argue

"It's not that complex to learn how a URL works!"

Ever seen someone double-click on a link to 'open' it? They learnt to double-click on desktop icons in order to open them, and are now applying that rule to the web. And here's the problem: there are a lot of arbitrary technicalities in computing that users are being asked to learn. I'm all for the education that Jono suggests. But general, reusable knowledge. Not the inner gears and ratchets of a mechanism conceptually born two decades ago that we call the World Wide Web. 

"If we hide the guts of the internet away from the average user, won't they become docile clueless consumers?" "One day they'll find themselves locked into some DRM encrypted, Apple/Microsoft/Google-only internet."

I like the way Google Reader handles this. The average user may not know anything about RSS, or where to find the appropriate RSS link. Instead, they just plonk the site into Google Reader and it works out the appropriate RSS URL for itself. The RSS standard is not compromised.
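
For the curious, the mechanism behind this is feed autodiscovery: the page advertises its feed with a <link rel="alternate"> tag, and the reader resolves it. Here's a minimal, regex-based sketch; a real implementation would parse the HTML properly, and I'm assuming the rel attribute appears before type and that the standard fetch API is available.

```typescript
// Minimal sketch of RSS/Atom feed autodiscovery. Error handling omitted.
async function discoverFeed(siteUrl: string): Promise<string | null> {
  const html = await (await fetch(siteUrl)).text();
  // Look for <link rel="alternate" ... type="application/rss+xml" ...>
  const link = html.match(
    /<link[^>]*rel=["']alternate["'][^>]*type=["']application\/(?:rss|atom)\+xml["'][^>]*>/i
  );
  if (!link) return null;
  const href = link[0].match(/href=["']([^"']+)["']/i);
  // Resolve a relative href (e.g. "/feed") against the page's own URL.
  return href ? new URL(href[1], siteUrl).toString() : null;
}

// Hypothetical usage:
// discoverFeed("https://www.readwriteweb.com/").then(feed => console.log(feed));
```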

"If the user isn't savvy enough to see the huge ReadWriteWeb banner, how are they to notice anything more subtle? AKA "what hope is there for these losers?"

The perceived complexity of the URL bar is a self-fulfilling prophecy of failure. People don't understand it, so they don't use it, so they don't notice when it displays something different from what they should expect. 

 

* Interestingly, http://www.leolaporte.com/ is hijacked. If the 'president of the internet' doesn't have a URL, what faith can we have in URLs at all?

 ** Bonus points to anyone who gets the Twinsong reference to the stories

Friday
Jul 31, 2009

Heroes: Doug Engelbart, Bootstrapping

Respected UX expert and blogger Whitney Hess has just published the latest in her Mentors and Heroes series, featuring yours truly. It's my take on a personal inspiration of mine: computer pioneer Douglas Engelbart. I'm really happy with how it turned out — an AHA! moment — finding myself with the perspective that Doug's success and 'failure' were won and lost by the same principle: bootstrapping.

"The realization I had was this: The people who Doug envisioned using his system wanted to do the VERY SAME THING that Doug’s team had done: Bootstrap! The users wanted to leverage what they knew already in the real world, and once inside the machine, learn as they went. The system needed to allow and encourage bootstrapping of *knowledge*."

I included a list of my own conclusions:

  1. Doug used his background in radar, and the inspiration of Bush’s article As We May Think, to imagine a future office worker’s challenges. He, and many talented people around him, worked hard to bring this idea to fruition, and were DECADES ahead of anything else. Lesson: It is possible to imagine and build the future, if it’s clear in your mind.
  2. Doug and his team believed in Bootstrapping — leveraging what they had to build the systems they needed, then using the improved system to get to the next level. Repeat as necessary. Lesson: Leverage what you have.
  3. The Mother of All Demos changed the computing world forever, but ultimately Doug’s system never was implemented widely. In fact, the patent on the mouse expired before it was ever mass-produced. Lesson: Just because it’s great doesn’t mean people will ‘get it’ or want to buy it.
  4. In the end, most of the talent in Doug’s team was poached by Xerox for PARC. Apparently, Doug had some unusual, ultimately unsuccessful ways of managing people at SRI. So they left. Lesson: A team needs to be happy to last.
  5. The impact of the brilliant 1968 demo echoed for decades. Lesson: Demo well.

If you've read this far, then you'll surely enjoy the full article on my hero, Doug Engelbart at Pleasure and Pain.



Thursday
Apr 23, 2009

A Personal History of Computer Graphics

Some of the guys from Pixar recount in this informal chat how they literally created much of computer graphics in the early 70s, from frame buffers to alpha layers, all the while driven completely by the need to tell good stories.



Tuesday
Apr 21, 2009

Triumph of the Nerds — Video

I stumbled across the following series by Robert Cringely, which represents a fairly good overview of personal computing history from the Altair, the Alto, Apple I, II and Mac, IBM PC, OS/2, and the various shades of Windows. It was made in 1996, when Apple was looking like a goner.

Ignoring the pink polo shirts, quite a good series on some of the strategic moves which made the industry what it is today.




Wednesday
Feb 4, 2009

Confronting Technology

the reaction was confusion or disbelief. Many people were apprehensive confronting a telephone for the first time. The disembodied sound of a human voice coming out of a box was too eerie, too supernatural, for many to accept

Ierley, Merritt. Wondrous Contrivances: Technology at the Threshold. New York: Clarkson Potter, 2002. 112–118.
Hannon, Charles. "Mental and Conceptual Models, and the Problem of Contingency."



Monday
Feb 2, 2009

Interfaces and Animation 

Motion can supplement the communication of computers, to good and bad effect. In recent years, simple animation in computer interfaces has become less and less costly as GPU power rockets skyward and new animation APIs require less coder-time to implement. It is therefore a wonderful time to consider utilizing animation as an additional channel for communication.

But can simple shapes and icons get across consistent messages through animation? Apparently so. The Orphan Film Symposium examines the work of psychology professors Fritz Heider and Marianne Simmel. In 1944 they created a short film illustrating the emotive potential of basic shapes. The film was shown to test subjects, and the research assessed the ability of motion alone to portray emotion or a story in a consistent way — it turned out that motion alone could impart emotion. Subjects reported emotions in the story relating to bullying and other conflict. This is possibly due to the desire to see a human element in many non-human forms, called anthropomorphism.

Disney's research has also proved to be a gold mine for me; the Disney animation studio spent years mastering the art of making inanimate objects expressive. The following example is taken from the enthralling The Illusion of Life by Frank Thomas and Ollie Johnston.

[Illustrations from The Illusion of Life. Used without permission; will take down on request.]

Disney also researched and developed, over many years, fundamental techniques to make animated emotion "feel" right. You can read a basic description of these techniques in a handout John Lasseter presented at SIGGRAPH, titled Principles of Traditional Animation Applied to 3D Computer Animation. These techniques are:

  • Squash and Stretch
  • Timing and Motion
  • Anticipation
  • Staging
  • Follow Through and Overlapping Action
  • Straight Ahead Action and Pose-to-Pose Action
  • Slow In and Out
  • Arcs
  • Exaggeration
  • Secondary Action
  • Appeal
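
As a tiny, illustrative sketch of the first of these, here is Squash and Stretch applied to a bouncing ball: stretch the ball along its direction of travel while it's in the air, squash it flat on impact, and keep the apparent volume roughly constant. The constants below are arbitrary assumptions of mine, not values from any animation system.

```typescript
// Squash and Stretch for a bouncing ball, expressed as per-frame scale factors.
function squashAndStretch(height: number, verticalSpeed: number): { scaleX: number; scaleY: number } {
  if (height <= 0) {
    // On the ground: squash flat and wide for a frame or two.
    return { scaleX: 1.4, scaleY: 0.6 };
  }
  // In the air: stretch along the direction of travel, more at higher speed.
  const stretch = 1 + Math.min(Math.abs(verticalSpeed) * 0.05, 0.4);
  return { scaleX: 1 / stretch, scaleY: stretch }; // area stays roughly constant
}

// A renderer might apply it each frame, e.g. with a canvas context:
//   const { scaleX, scaleY } = squashAndStretch(ball.y, ball.vy);
//   ctx.save(); ctx.scale(scaleX, scaleY); drawBall(ctx, ball); ctx.restore();
```
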
These techniques make each motion more cognitively clear, by accenting either the visual or cognitive distortions of shape and path. For example, the sketch above is a particularly clear case of Squash and Stretch in a bouncing ball animation. There is a wealth of knowledge in the animation industry that could be applied to new interfaces.

Of course, there are downsides to motion. Nobody wants an all-dancing, all-prancing interface, with icons zooming around, squishing and spinning excessively. And the use of motion in computer interfaces is exploratory, with few commonly used gestures — the occasional bouncing icon, flipping settings panel and 'genie' effect, for example.

So what might better use of motion look like? What are some 'universal' gestures people would recognize? Have a look at the 10 motions of this experimental interface:

[Video: Abstract Social Interface from Keith Lang on Vimeo. Video supplied by Bilge Mutlu; used with permission.]

The video above is by Bilge Mutlu and his team at the Human-Computer Interaction Institute at Carnegie Mellon. His accompanying research paper, "The Use of Abstraction and Motion in the Design of Social Interfaces", explores the emotions and interpretations of an abstract animated interface. Bilge has been kind enough to let me include the video which accompanies this paper. When I look at this video, I can imagine the motions expressing:
  1. The system is waiting for some input or data transmission
  2. The system is processing information
  3. The system is sleeping
  4. 'Spanner in the works'?
  5. There is something outstanding which requires your attention
  6. I don't know
  7. Configuring…
  8. Finishing configuring
  9. Testing
  10. Hard work
What are your interpretations of the video? You can read the emotions that other people, with considerable consistency, associated with the movement in Bilge's paper.

Bilge emailed me to add an additional point:
…[just] as Disney developed these principles for creating lifelike motion for their characters, there are techniques for designing abstract motions that are not (necessarily) lifelike, but expressive of emotions and causality. I used "mood boards" to identify shapes, forms, motions that might communicate each emotion (there is an example of a mood board in the paper, which, of course, you can use). Heider and Simmel used abstraction of actual human behavior (you can almost replace the shapes with people). And, I am sure there are other techniques that others have developed. 
So motion can convey meaning, whether it be anthropomorphic (human-like) or not. Could subtle animation ever be a part of most interfaces? How much motion is too much? Are the current animations good or bad?
