Daddy, What's a Mouse?
"Daddy, what's a mouse?"
"It's something that we used to point at objects on a computer screen"
"Just one thing at a time?"
"Yes honey."
"Wow! But how did you do this?"
[She resizes a square with two fingers and then touches the others to propagate the change]
"Well…in the past it was different. First you need to select all the objects you were interested in, by clicking in a space nearby, then dragging an imaginary rubber band around them all. If they weren't next to one another, then you needed to hold down Command on the keyboard while you clicked on each one. Then you would adjust the size of them with a separate control panel at the side of the screen. Or you might size one how you want, then press Command+C to copy, then Command+V to paste the squares…… are you listening?"
"No, sorry Daddy, that's all too technical for me. I don't know how you remembered all that in the old days!"
In the future, our children will all use rich multi-touch devices. They will look at the mouse-and-keyboard combination the same way we look at the Command Line Interface today.
Reader Comments (9)
Well put Keith! I think you're right, and I think you hit the nail on the head about exactly why the iPad is going to be a great and pioneering device.
I threw some words at it myself, but mine were nowhere near as simple as yours.
http://alsowik.net/blog/2010/1/29/the-ipad.html
Thanks Josh. :-)
Haha, very interesting. I have a feeling that at some point this conversation will happen for real...
Right on! My 8-year-old daughter has an iPod touch. The other day, we were looking at pictures on my MacBook. She tried hand gestures on the MacBook's screen to zoom. I explained to her that she needs to do that on the trackpad.
From her (and every future user's) perspective, if the picture is on the screen, why is the hand gesture made on the trackpad? It didn't make sense to her.
Of course the command line is still useful for some tasks. But I can't see how the mouse will ever be more useful than multi-touch.
@Max, I think the mouse may still offer a level of precision that touch may not (at a guess) be able to match, due to fingers being fat. Unless some better interaction is designed. iWork for iPad seems to have some solutions.
Why would touching other squares propagate the change? Isn't that same interaction completely doable with today's mice? (And if so, shouldn't we be doing it?)
Multi-touch gives us more powerful gestures, compared to a mouse with just a few buttons. (And a mouse with many buttons seems pretty silly!) But it's not yet clear to me how much expressiveness that will buy us. For example, what gesture do you use to adjust the drop-shadow, or fill & stroke colors of a shape? Would we still need some kind of properties palette for that?
The iPhone's pan-and-zoom web browsing interface can be done with a standard mouse & PC. Google Maps has been doing this for years: the scroll wheel zooms, click & grab scrolls. I hate it. It clashes with muscle memory. It has none of the fun and intimacy of using an iPhone to actually touch a webpage. But as I see it, it's the same interactions & interface, just exposed in a poor way.
I'm excited about multi-touch. Directly touching your work is fantastic. Gestures work very well for some tasks (e.g. rotating a shape). But I haven't yet seen an interface paradigm that felt like it was a huge leap -- something that couldn't be inelegantly (though serviceably) done with a mouse, à la Google Maps.
> Why would touching other squares propagate the change? Isn't that same interaction completely doable with today's mice? (And if so, shouldn't we be doing it?)
I'm basing this on Apple's demoed iWork functionality. It could be done with a mouse, but it would involve a couple of modifiers: one for the multiple selection, one to apply the resize.
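To make that concrete, here's a minimal sketch of the mouse version, assuming a hypothetical Shape/Canvas model (illustrative TypeScript, not Apple's actual iWork code):

```typescript
// Mouse version: Command-click builds the selection, then one resize
// action propagates across it. The Shape/Canvas model is hypothetical.

interface Shape {
  id: string;
  width: number;
  height: number;
}

class Canvas {
  private shapes = new Map<string, Shape>();
  private selection = new Set<string>();

  add(shape: Shape): void {
    this.shapes.set(shape.id, shape);
  }

  // Modifier 1: Command-click toggles a shape in the multiple selection;
  // a plain click replaces the selection with just that shape.
  click(id: string, commandHeld: boolean): void {
    if (!commandHeld) {
      this.selection.clear();
      this.selection.add(id);
      return;
    }
    if (this.selection.has(id)) {
      this.selection.delete(id);
    } else {
      this.selection.add(id);
    }
  }

  // Modifier 2: dragging a resize handle on any selected shape applies
  // the same new size to every shape in the selection.
  resizeSelection(width: number, height: number): void {
    for (const id of this.selection) {
      const shape = this.shapes.get(id);
      if (shape) {
        shape.width = width;
        shape.height = height;
      }
    }
  }
}
```

The touch version collapses both modifiers into direct contact: the two-finger pinch is the resize, and touching the other squares is the selection.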
> Multi-touch gives us more powerful gestures, compared to a mouse with just a few buttons. (And a mouse with many buttons seems pretty silly!) But it's not yet clear to me how much expressiveness that will buy us. For example, what gesture do you use to adjust the drop-shadow, or fill & stroke colors of a shape? Would we still need some kind of properties palette for that?
I think so. The idea of a 'menu' is a powerful one. Expecting people to learn and memorize a lexicon of gestures in order to interact is sure to fail.
> The iPhone's pan-and-zoom web browsing interface can be done with a standard mouse & PC. Google Maps has been doing this for years: the scroll wheel zooms, click & grab scrolls. I hate it. It clashes with muscle memory. It has none of the fun and intimacy of using an iPhone to actually touch a webpage. But as I see it, it's the same interactions & interface, just exposed in a poor way.
I totally agree. I can't understand why they don't change it to work the way people expect.
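For what it's worth, that mouse mapping is easy to sketch. This is illustrative TypeScript with hypothetical names, not Google Maps' actual code:

```typescript
// Scroll-wheel zoom and click-&-grab pan against a 2D viewport.

interface Viewport {
  x: number;    // world coordinate at the screen's left edge
  y: number;    // world coordinate at the screen's top edge
  zoom: number; // screen pixels per world unit
}

// Scroll wheel zooms, keeping the world point under the cursor fixed.
function onWheel(view: Viewport, cursorX: number, cursorY: number, deltaY: number): void {
  const factor = deltaY < 0 ? 1.1 : 1 / 1.1;
  const worldX = view.x + cursorX / view.zoom; // point under cursor, before
  const worldY = view.y + cursorY / view.zoom;
  view.zoom *= factor;
  view.x = worldX - cursorX / view.zoom;       // same point, after
  view.y = worldY - cursorY / view.zoom;
}

// Click & grab pans: the viewport origin moves opposite the mouse delta.
function onDrag(view: Viewport, dx: number, dy: number): void {
  view.x -= dx / view.zoom;
  view.y -= dy / view.zoom;
}
```

A pinch is the same math with deltaY swapped for the change in distance between two touches, which I think is exactly your point: same interaction, just exposed differently.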
> I'm excited about multi-touch. Directly touching your work is fantastic. Gestures work very well for some tasks (e.g. rotating a shape). But I haven't yet seen an interface paradigm that felt like it was a huge leap -- something that couldn't be inelegantly (though serviceably) done with a mouse, à la Google Maps.
That will change, I believe, because a large number of developers will now have access to high-quality *large* touch displays. After all, there are only so many fingers you can physically fit on an iPhone/Android screen.
Cute post! Love it, except I doubt we will be dropping the mouse and keyboard anytime soon. For leisure and personal computing perhaps, but not for business and productivity (or even for gaming). Touch has some ergonomic issues: try using a touch-only device for a day's work, whether it's vertical or horizontal (like MS Surface). The mouse and keyboard are still the best way to interact with a computer.
To make an analogy: I'm sure engineers from the early 1900s would never have guessed that today's cars would still be controlled by the primitive steering wheel.
Thanks for the post!