Thursday, February 23, 2012

Iterative User Interface Design

Overview

This paper is about general iterative user interface design and demonstrates the successes of using the process. The overall improvement in usability from the first to the last iteration was shown to be 165%, and the median improvement per iteration was 38%. The paper suggests around three iterations, since further iterations could actually decrease usability when the usability engineering process is focused on improving other parameters.
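
As a rough sanity check (my own arithmetic, not a figure from the paper): compounding the 38% median per-iteration improvement over three iterations gives 1.38^3 ≈ 2.63, i.e., roughly a 163% overall improvement, which is in the same ballpark as the reported 165%.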

The Benefits of Iteration

Nielsen presents a graph showing that usability increases with each iteration until it eventually hits a plateau. A con is that usability sometimes decreases after an iteration, since new usability problems may be introduced. The pro is that these can usually be ironed out shortly afterward. He also notes that interface reconceptualizations have not been studied on projects completed by an individual.

An example from later in the paper (Table 5) saw a dramatic decrease in Time on Task and Subjective Satisfaction and an increase in Errors Made and Help Requests from version 2 to 3 and from 3 to 4. However, by version 5, the final version, all of these measures had improved. A big con is that if a project cannot be updated past an intermediate iteration, it might be better to be left with version 1 than with version 3.

From my perspective, iterative design is natural, and projects will follow it even without the designers trying to. Building several different designs, comparing them side by side, and choosing the best will also work, but the chosen version would still be improved through an iterative process. The best method for design is to choose between multiple designs and then iteratively improve the chosen one, borrowing features from the other versions.

Conclusions

The number of iterations cannot be chosen in advance, since the version from iteration 3 might be worse than the first but, with two more iterations, could be much better than it. It would be a big con to try to choose the exact number of iterations beforehand and a large pro to let the number of iterations stay dynamic.

Discussion
  • Iterative usability design for individuals instead of teams
  • Alternative to iterative design: iteration is natural
  • Nielsen writes about the iPad & touches upon multi-touch in this article:
    • http://www.useit.com/alertbox/ipad.html

A Procedure for Developing Intuitive and Ergonomic Gesture Interfaces for Man-Machine Interaction

Introduction

This paper is about using gestures for humans to interact with machines. It covers issues such as culture, the different types of gestures, creating a universal vocabulary, ergonomics & bio-mechanics, and approaches to finding gestures, and finally presents an experiment.

Culture

A big hurdle in building gesture-based systems is that gestures are generally culture-dependent. A keyboard, which depends on a language, can be translated fairly easily for use by another culture; it is not so easy with gestures. A universal gesture language could be created, but there would be a huge learning curve for most, if not all, cultures. One example given is that a ring made with the thumb and index finger means OK in America but means money in Japan.

Discussion
  • Creating a culturally unbiased universal gesture language
  • Practical uses for gesture-based man-machine interaction
  • Standard learning for basic computer classes

Experiences with and Observations of Direct-Touch Tabletops

Overview

This paper goes over user experiences with direct-touch tabletops. It details aspects such as simultaneous touching, ambiguous input, one-fingered touch, finger resolution, alternate touch input, crowding and clutter, text input, orientation, multi-user coordination, occlusion, ergonomic issues, and mental models. The system used for their experiences and observations is DiamondTouch, a direct multi-touch, multi-user tabletop that can also identify which user is touching each particular location on the surface.
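
To make the user-identification idea concrete, here is a minimal sketch (my own illustration, not DiamondTouch's actual API; every class and field name here is hypothetical) of how a touch event that carries a user identity might look to application code:

    // Hypothetical event type: a touch point that knows which user produced it.
    // DiamondTouch attributes touches to users via per-user receiver pads;
    // this sketch only models the result of that attribution.
    public class IdentifiedTouch {
        public final int userId;  // which seated user generated the touch
        public final float x, y;  // surface coordinates

        public IdentifiedTouch(int userId, float x, float y) {
            this.userId = userId;
            this.x = x;
            this.y = y;
        }
    }

    // With identity attached, per-user policy (e.g., only the owner may
    // manipulate an object) becomes a simple comparison.
    class OwnedObject {
        private final int ownerId;
        OwnedObject(int ownerId) { this.ownerId = ownerId; }

        boolean accepts(IdentifiedTouch t) {
            return t.userId == ownerId;
        }
    }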

Observations

An observation they made about simultaneous touching was that users hesitated to use the tabletop at the same time as others. This is most likely because users are unfamiliar with a shared workspace, but the pro is that users will likely accept it over time.

Another observation they made was about ambiguous touches: users simply touched the table accidentally with their wrists and so on.

Users started off using only one finger to interact with the device. However, after some video learning & use of the system, they began to interact with multiple fingers and hands.

There is some cognitive friction around finger resolution. Users have different-sized fingers, touching with a finger is not as precise as pointing with a mouse, and standard windows and widgets are targeted at a mouse pointer rather than a finger. A designer has to keep this in mind when creating a UI for a multi-touch system: rather than reusing the standard UI elements found on a traditional computer, the elements must be designed differently when the user interacts through multi-touch rather than with a mouse.
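
One way a designer might act on this is to size touch targets in physical units rather than pixels. A minimal sketch (my own; the ~9 mm minimum is an assumed figure in the range touch guidelines commonly suggest, not a number from this paper):

    // Sketch: derive a minimum touch-target size in pixels from display
    // density, so controls stay finger-sized on any screen.
    public class TouchTargets {
        // Assumed minimum edge of ~9 mm; purely illustrative.
        static final double MIN_TARGET_MM = 9.0;
        static final double MM_PER_INCH = 25.4;

        /** Minimum target edge in pixels for a display of the given DPI. */
        static int minTargetPx(double dotsPerInch) {
            return (int) Math.ceil(MIN_TARGET_MM / MM_PER_INCH * dotsPerInch);
        }

        public static void main(String[] args) {
            System.out.println(minTargetPx(96));   // desktop monitor: 35 px
            System.out.println(minTargetPx(160));  // denser display: 57 px
        }
    }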

Some users preferred a stylus or some other abstracted input device. The pro of interacting with the system directly is that there is no extra layer of abstraction; with a stylus, that layer of abstraction remains.

Users preferred to have their “space” when using the system. A multi-user system must give each user a certain amount of space to feel comfortable, yet keep users close enough that they can simultaneously “work” together. Users should not be so close that they accidentally touch an object another user is interacting with.

Users had problems entering text into the machine. An external “real” keyboard should be available for users.

Few problems were found with orientation for small chunks of text, but users had trouble with large chunks and were hesitant to use the mechanism for rotating text. The paper proposes several solutions to this issue, each of which might be best for a specific application.

Users had problems coordinating with each other, both by accident and intentionally. Mechanisms for coordination are cited in the paper.

They noticed that occlusion and shadows from hands on top-projection devices caused almost no problems for users.

The physical design of the system, such as its width & height, showed some problems because users would accidentally trigger unintended input. They found that a low (coffee-table) surface is better for casual tasks, while a higher (desk) surface is better for productivity tasks.

Finally, they found that users did not view the interactive tabletop as a computer. This was a pro, since users found it more pleasant to use than a traditional desktop computer.

Discussion
  • DiamondTouch’s ability to identify which user is touching a particular location on the surface.
  • Some problems exist only because users are unfamiliar with touch surfaces; touch will become more “natural” once it is more commonly used.
  • Touch as a standard input that should be taught to children in a basic computer course, like the mouse, keyboard, stylus, etc.

Design and Validation of Two-Handed Multi-Touch Tabletop Controllers for Robot Teleoperation

Overview

This paper describes a virtual controller for a multi-touch surface. The authors compare this remote controller to a physical controller in an experiment conducted in a mock-up of a disaster area in which operators had to locate victims. Finally, the results are explored and presented.

DREAM Controller

The DREAM controller is a multi-touch controller that is similar to a video game controller but laid out flat on a screen. The controller automatically appears under, and follows, a hand placed on the surface. Its biggest drawback versus a physical controller is the lack of immediate tactile feedback: a user has to memorize the controller and learn that a button was pressed without physically feeling it, as they would with a physical controller. If a user pushes the ‘A’ button on a physical controller with their eyes closed, it is evident; with a virtual controller on a surface it is not, and the user must trust that the system received the command.
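
Since a virtual button cannot push back, a designer can at least substitute immediate visual and audible confirmation on touch-down. A minimal sketch of that idea (my own illustration; the paper does not prescribe this mechanism):

    // Sketch: a virtual button that compensates for missing tactile feedback
    // with an instant visual state change and a click sound.
    public class VirtualButton {
        private boolean pressed = false;

        // Called by the touch layer when a contact lands inside the button.
        public void onTouchDown() {
            pressed = true;
            repaintHighlighted();  // visual confirmation before any action runs
            playClick();           // audible confirmation
        }

        public void onTouchUp() {
            if (pressed) {
                pressed = false;
                repaintNormal();
            }
        }

        // Rendering/audio hooks left abstract; any UI toolkit could supply them.
        private void repaintHighlighted() { /* draw pressed state */ }
        private void repaintNormal()      { /* draw idle state */ }
        private void playClick()          { /* short confirmation sound */ }
    }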

Experiment

There is no mention of how the “training” on the joystick and on the DREAM controller was balanced. It may be that the experimenters favored “training” the subjects on the DREAM controller over the physical controller: since the experimenters are from UMass, where the DREAM controller was designed, they may have unknowingly put more emphasis on training people to use it. A safety net should have been set (it may have been, but it is not mentioned in the paper) to ensure an equal amount of training and enthusiasm for each of the two methods. With the results being only marginally statistically significant and this question open, I am a bit skeptical of the experiment & results.

Results

The results left me a bit skeptical. There was only very weak statistical significance in the user response (travelling further, finding more victims) supporting the advantages of the virtual remote.

Another interesting result was that some operators were able to travel so far that they lapped the course and found the same victim more than once.

Discussion
  • Bias in training operators to use physical controller versus using the DREAM controller
    • The operators may have received training that the experiment conductors unknowingly biased in favor of the DREAM controller over the physical controller.
  • The statistical significance was very small: this may show that there is not much of a difference between the two.
  • It would be interesting to see an experiment comparing the DREAM controller to an actual physical controller similar to one from a PlayStation or Xbox.

Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces

Overview

This paper goes over freehand gestural interaction with direct-touch computation surfaces. It then presents the design principles of gesture registration, gesture relaxation, and gesture & tool reuse. They tested these principles with an annotate gesture, a wipe gesture, a cut/copy & paste gesture, and a pile-n-browse gesture in a user evaluation.

Design Principles

One drawback, mentioned in the gesture relaxation section, is that the height of a table, be it coffee-table or desk height, may impact the system’s ability to read a gesture. If a user puts down three fingers at coffee-table height while standing, the surface may read only the fingertips, while at desk height while sitting it may read the fingertips plus much of the rest of the fingers. If these were two different gestures, the system would not be able to differentiate them.
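
One way a system might guard against this is to classify each raw contact by its area before matching gestures, so a fingertip and a flattened finger are not read as the same kind of contact. A rough sketch (my own; the thresholds are invented for illustration):

    // Sketch: classify contact blobs so a fingertip at one table height and
    // a flattened finger at another are not confused during registration.
    public class ContactClassifier {
        enum Kind { FINGERTIP, FLAT_FINGER, UNKNOWN }

        // Thresholds in square millimeters; purely illustrative values.
        static final double MAX_TIP_AREA_MM2  = 80.0;
        static final double MAX_FLAT_AREA_MM2 = 400.0;

        static Kind classify(double contactAreaMm2) {
            if (contactAreaMm2 <= MAX_TIP_AREA_MM2)  return Kind.FINGERTIP;
            if (contactAreaMm2 <= MAX_FLAT_AREA_MM2) return Kind.FLAT_FINGER;
            return Kind.UNKNOWN;  // palm, forearm, or sensor noise
        }
    }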

In the “gesture and tool reuse” section, they hint at using the same gesture to mean different things in different contexts. This is similar to having modes, which is typically not a good thing according to most HCI studies: it could confuse users, who perform a certain gesture expecting a certain result but get a different one. There must be cases where it is clearly evident that reusing a gesture in a different context yields the expected result, but designers have to be careful.
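
The mode risk is easy to see in code: the moment gesture meaning is keyed by context, the same physical motion silently changes behavior. A minimal sketch (my own illustration, not from the paper):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: one gesture bound to different actions per context. This is
    // exactly a mode, so each reuse should stay conceptually related.
    public class GestureDispatcher {
        private final Map<String, Map<String, Runnable>> bindings = new HashMap<>();

        public void bind(String context, String gesture, Runnable action) {
            bindings.computeIfAbsent(context, c -> new HashMap<>()).put(gesture, action);
        }

        public void dispatch(String context, String gesture) {
            Runnable action = bindings.getOrDefault(context, Map.of()).get(gesture);
            if (action != null) action.run();
        }

        public static void main(String[] args) {
            GestureDispatcher d = new GestureDispatcher();
            // Related meanings in related contexts are probably safe reuse:
            d.bind("canvas", "two-finger-drag", () -> System.out.println("pan canvas"));
            d.bind("photo",  "two-finger-drag", () -> System.out.println("pan photo"));
            d.dispatch("photo", "two-finger-drag");  // prints "pan photo"
        }
    }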

Another note: a user may be in the middle of one gesture, post gesture registration, and have the urge to perform another gesture without finishing the first. This could cause some confusion among users and yield unpredicted results. Some system feedback could possibly keep users aware that the prior gesture has not finished yet.

It is good that gesture primitives can be reused, allowing users to learn a smaller vocabulary. Designers must simply be careful not to use these primitives for completely different tasks.

User Evaluation

In the study, the users appeared to learn the gestures fairly easily, but some struggled with the “piling” gestures. It would be interesting to see whether users remembered these gestures after not using them for some period of time.

Discussion
  • A common vocabulary of gesture primitives
    • Using a primitive in different contexts should be reserved for similar tasks and yield similar results. For example, dragging one finger should mean moving something in all contexts, not resizing in one context and panning the screen in another; to me, that would be confusing. There should be a common vocabulary where, say, four fingers always drag the screen and never move an item on the screen.
  • Memory of gestures
    • It would be interesting to see a study of how users memorize these common gestures and gesture primitives over time. They may become second nature or users would keep having to refer back to some guide.
  • Primitives as letters, and gestures as words
    • It is interesting to compare the gestures & primitives to a full human language
      • Each primitive would be like a letter or word, and gestures would be like sentences. Users would be able to “speak” to the computers with this language (a small sketch of this idea follows the list).
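
A small sketch of the language analogy (my own illustration; the primitive names and bindings are hypothetical): primitives act as tokens, and a recognized sequence of them resolves to one command, like a sentence.

    import java.util.List;
    import java.util.Map;

    // Sketch: gesture primitives as "letters", primitive sequences as
    // "sentences" that each resolve to a single command.
    public class GestureLanguage {
        private static final Map<List<String>, String> SENTENCES = Map.of(
            List.of("touch", "drag"),          "move-object",
            List.of("touch", "touch", "drag"), "copy-object",
            List.of("flat-hand", "sweep"),     "wipe-annotations"
        );

        static String interpret(List<String> primitives) {
            return SENTENCES.getOrDefault(primitives, "unrecognized");
        }

        public static void main(String[] args) {
            System.out.println(interpret(List.of("touch", "drag")));  // move-object
        }
    }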

Multi-touch: Analysis of Natural Gestures for Controlling Robot Teams on Multi-touch Tabletop Surfaces

Overview

This paper is about a study of users applying natural gestures to control robot teams with multi-touch input. The authors developed 26 tasks for the users to complete and recorded the gestures they attempted while performing the tasks. Finally, they classified these gestures (selection, position, rotation, viewpoint, user interface elements) and analyzed them, resulting in some discussions & conclusions.

Related Work

Common tasks performed with a pointer tend to be executed with one finger, which makes it difficult to assign multiple touch points to these tasks. Since users have learned to perform them with a single point, it would be very difficult to reteach users to perform these same tasks with multiple points. Additionally, users typically control a computer with a single hand and would have to learn to use two hands. Koskinen showed this in a study.

Results and Discussion

One interesting finding is that users did not prefer to use one finger from one hand. Further, the authors showed that when users confront an unfamiliar multi-touch UI, they are more willing to use multiple fingers and multiple hands. When multi-touch becomes more widespread, I think using multiple fingers and hands will be more commonly known. Additionally, this pro shows that when users encounter real-life-like objects in a virtual environment, they are more likely to use real-world-style interactions (multi-hand & multi-finger).

Discussion
  • Multi-finger, multi-hand contradiction:
    • It was interesting to see that they found a contradiction to the belief that users prefer to use one hand and one finger.
  • Users exploring multi-touch UIs:
    • It was interesting to see users’ natural reactions when encountering a multi-touch UI.
  • Bias from past learned behavior from mouse pointers:
    • What can be done to unteach users the single-point paradigm? Or should multi-touch UI designs be changed, away from a possibly better UI, to adhere to users’ familiarity with single-point interaction?

User-Defined Gestures for Surface Computing

Overview

This paper was about a study of user-defined gestures for surface computing, and the authors discuss developing a vocabulary based on the common gestures that users perform without any outside influence. In the study, they showed an action on the screen and the user tried to guess the gesture that caused it. Further, they created a taxonomy, analyzed the results in detail, made observations, and discussed the experiment & the findings they uncovered.

Developing a User-Defined Gesture Set

One idea presented was that users are not designers, so care must be taken when building a gesture set from user-defined gestures. Testing a large number of people yields a general consensus on the most popular gesture for causing a specific action, but that may not be the best gesture for the action. Another interesting point is that feedback from the computer might change the next gesture, or the next primitive step of a gesture. For example, if a ripple appears when a user puts his or her hand down, the user may take a different course than if the ripple did not appear. This could have impacted the experiment, but they stated that they removed such feedback.

Discussion
  • It would be interesting to see a study where the subjects were unaware that the testers were looking for them to provide the most logical gesture to cause an action
    • Users could be given a UI and told to make some gesture to cause an action (such as delete, move, cut, etc.) in a “dummy” app, and that data could then be collected; this way the user would naturally try some gesture to cause the action.
  • Integrating past knowledge learned from single-point devices will bias the most “natural” form of gestures: it would be interesting to see whether the results are the same when people who are not computer literate are tested.
  • Mixed UIs:
    • Standard computer input alongside a multi-touch screen: this would be the most practical application for typical computer use. Examples would be a small box in the corner with common tasks like cut, copy, paste, etc. (another type of shortcut), drawing a question mark on the screen for help, or dragging and dropping an item with the fingers when a user finds it more convenient than doing it with a mouse.

Thursday, December 15, 2011

Loading profile failed: An unexpected failure occurred

My Google Checkout account is not loading and Google Wallet is giving the error: "Loading profile failed: An unexpected failure occurred".

I'm assuming this has something to do with the transition from Google Checkout to Google Wallet. This is for my Android app. The same thing happened about a week ago, and on that day I got about a quarter of my usual sales. I hope it gets fixed soon.

Friday, December 9, 2011

Loading profile failed: Oops - profile server was unable to process your request

The automatic transition from Google Checkout to Google Wallet is, I believe, causing this error on my account. Google will most likely fix this. If you are having the same problem, I would wait another 24 hours to see if it gets fixed and then contact Google.

Thursday, October 27, 2011

Guess The Letter

A very simple app I made to test the publishing process on the Android Market: Guess The Letter. It is an application that displays a new letter at the press of a button. It is good for a parent trying to teach their children to recognize letters. It is free and does not have any ads.

Tuesday, October 18, 2011

8D8Apps

8D8Apps is a venture that I'm pursuing in which I will try to build a company around the development of apps for Android devices.

Tuesday, September 27, 2011

Android Links