Thursday, February 23, 2012

Multi-touch: Analysis of Natural Gestures for Controlling Robot Teams on Multi-touch Tabletop Surfaces

Overview

This paper presents a study of how users employ natural gestures to control robot teams through multi-touch input. The authors designed 26 tasks for participants to complete and recorded the gestures they attempted while performing them. They then classified these gestures into categories (selection, position, rotation, viewpoint, and user interface elements) and analyzed the results, which inform the paper's discussion and conclusions.
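To make the classification scheme concrete, here is a minimal sketch (my own illustration, not the authors' code) of how recorded gestures and the five categories might be represented for tallying. All names and fields here are assumptions.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical labels mirroring the paper's five gesture categories.
class GestureCategory(Enum):
    SELECTION = auto()
    POSITION = auto()
    ROTATION = auto()
    VIEWPOINT = auto()
    UI_ELEMENT = auto()

@dataclass
class RecordedGesture:
    """One gesture observed while a participant attempted a task (illustrative only)."""
    task_id: int               # which of the 26 tasks was being attempted
    fingers: int               # number of contact points used
    hands: int                 # one or two hands
    category: GestureCategory  # label assigned during classification

def tally_by_category(gestures: list[RecordedGesture]) -> Counter:
    """Count how often each gesture category was attempted."""
    return Counter(g.category for g in gestures)

# Example usage with made-up observations:
log = [
    RecordedGesture(task_id=1, fingers=1, hands=1, category=GestureCategory.SELECTION),
    RecordedGesture(task_id=1, fingers=2, hands=2, category=GestureCategory.POSITION),
]
print(tally_by_category(log))
```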

Related Work

Tasks commonly performed with a mouse pointer tend to be carried out with a single finger on touch surfaces. Because users have learned to perform these tasks with a single point of contact, retraining them to use multiple points would be difficult. Similarly, users typically control a computer with one hand and would have to learn to use two. Koskinen found evidence of this in a study.

Results and Discussion

One interesting finding is that users did not prefer to use a single finger on a single hand. Further, the authors showed that when users confront an unfamiliar multi-touch UI, they are more willing to use multiple fingers and both hands. As multi-touch becomes more widespread, I think using multiple fingers and multiple hands will become more commonly understood. Additionally, the paper shows that when users encounter lifelike objects in a virtual environment, they are more likely to use real-world style interactions (multi-hand and multi-finger).

Discussion
  • Multi-finger, multi-hand contradiction:
    • It was interesting that they found a contradiction to the belief that users prefer to use one hand and one finger.
  • Users exploring multi-touch UIs:
    • It was interesting to see users' natural reactions when encountering a multi-touch UI.
  • Bias from past learned behavior from mouse pointers:
    • What can be done to unteach users the single-point paradigm? Or should multi-touch UI design be changed, away from a possibly better interface, to adhere to users' familiarity with single-point interaction?
