Thursday, February 23, 2012

Multi-Touch Systems that I Have Known and Loved

Overview

This paper presents Bill Buxton's answers to some general questions that people frequently ask him about multi-touch, a topic he has been involved with for many years. It then goes into the history of multi-touch systems dating back to the early 1980s.

Chronology of Systems

The paper lists many interactive devices that were multi-touch systems but not the standard flat-screen device most people picture when they hear “multi-touch”. One good example is the electroacoustic music device. It was not well implemented, but a device could be built whose input affords the sounds better than a standard keyboard does.

Physical vs. Virtual

Bill Buxton discusses how virtual devices may not be ideal compared to real physical devices. This is definitely a con for a flat multi-touch screen. For example, if a user were playing a racing game, a real physical steering wheel, like the one for the Wii, would probably be superior to a virtual steering wheel on a flat screen. Another example is an MP3 player that can be paused, or have its volume changed, with one hand while the device is still in a pocket; a pure touch screen prevents that. A pure touch-screen MP3 player may cause problems for someone at the gym compared to one with physical controls that can be operated with one hand while the device is strapped to their arm.

Discussion
  • Physical vs Virtual
  • Something more than just visual feedback
  • “Everything is best for something, but worst for something else”
  • A screen that can move up & down so that the screen was not flat
    • could get feedback with your eyes closed.

Response to Ripples

Overview

This paper presents Ripples, a framework intended to solve some common usability problems with multi-touch devices. Ripples tries to provide a standard design for giving feedback on user interactions. The paper describes the problem and the proposed solution in more detail, and its experiments show that users of multi-touch devices prefer using them with the Ripples effects.
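
To make the idea concrete, here is a minimal sketch of Ripples-style per-touch feedback. This is my own illustration, not Microsoft's implementation; it only shows the general pattern of spawning a short-lived visual effect at every contact point, independent of what the application does. All names and the 0.5-second lifetime are assumptions.

  import time
  from dataclasses import dataclass, field

  @dataclass
  class Ripple:
      x: float
      y: float
      start: float = field(default_factory=time.time)

  class FeedbackLayer:
      LIFETIME = 0.5  # seconds a ripple stays visible (assumed value)

      def __init__(self):
          self.ripples = []

      def on_touch_down(self, x, y):
          # Every contact spawns a ripple, regardless of the application.
          self.ripples.append(Ripple(x, y))

      def render(self, now=None):
          """Return (x, y, radius, alpha) tuples for the drawing layer."""
          if now is None:
              now = time.time()
          drawn, alive = [], []
          for r in self.ripples:
              age = now - r.start
              if age < self.LIFETIME:
                  alive.append(r)
                  t = age / self.LIFETIME
                  drawn.append((r.x, r.y, 40 * t, 1.0 - t))  # grow and fade
          self.ripples = alive
          return drawn

  layer = FeedbackLayer()
  layer.on_touch_down(100, 200)
  print(layer.render())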

Pros & Cons

A pro of this type of standardized framework is that users will have a familiar set of interaction feedback they can grow accustomed to. A con is that developers might be limited to this standard set of feedback. Also, when a developer tries out a new, innovative idea, the typical user who is familiar with Ripples may be confused.

Experiment

The user study they performed showed that people preferred using Ripples to not using it, which suggests the system is more pleasant for participants to use. The discussion of the experiment didn't go into much detail about the non-Ripples system it was compared against. Some developers may be able to implement a feedback system that is more pleasant to use than Ripples; however, having at least the default Ripples was shown to be better than no feedback, or than the “other” feedback system they compared it to.

Further Development

There was no mention of any sound feedback. A set of sounds triggered by different interactions would be a natural complement. MS Windows, Apple computers, and some smartphones/tablets have a standard set of sounds for interactions with the device, so it seems like a good idea to standardize a set of sounds for multi-touch interactions as well.

Discussion
  • Design Principles for Multi-touch systems
  • Custom system feedback may be better in some cases than using Ripples
  • Sound effect feedback

Your Observations of Multi-Touch Usability

Overview

This paper will provide an overview and comparison of MS Surface and 3M Capacitive System.

Ease of Use

Each system has a different user interface. The MS Surface appears to be easier to use and more intuitive than the 3M capacitive system; navigating on the Surface seemed much easier than on the 3M. This impression may be biased because I spent more time observing the MS Surface.

Interaction Method

The main difference between the two systems is that the MS Surface uses cameras behind its rear-projected screen to detect what is placed on it and where (finger, hand, or object), while the 3M capacitive system senses touch electrically through the conductivity of the user's body. This allows for different uses. The 3M capacitive system can't detect non-conductive, non-human objects touching the screen; the MS Surface can. The interaction method of the MS Surface seemed superior to that of the 3M system: the Surface appeared able to interact with static, non-human objects where the 3M system was limited to human touch. I noticed that some objects were not being detected on the MS Surface, but that was likely an application-level bug.

System Feedback

The system's response to a touch seemed to be application specific, varying from app to app rather than from system to system. There does not appear to be a noticeable difference in which one is better.

Orientation

The orientation for the 3M was much more apparent & limited than that of the MS Surface. The top/bottom/left/right on the MS Surface could be ambiguous and depend on the user’s perspective. The 3M system has a very clear top/bottom/left/right and these would not change based on the user’s perspective.

System Implications

The technology behind the 3M system could be much more powerful than that of the MS Surface. The 3M system could better detect signals from the human body, such as temperature and pressure, and potentially infer the stress level the user is experiencing, which could be used in games or other applications. The MS Surface only measures shadows, and this could limit it in some ways.

Discussion
  • Capabilities of Capacitive technology
  • Future of 3M & Surface multi-touch systems

Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection

Overview

This paper discusses frustrated total internal reflection as a simple, inexpensive, and scalable technique for enabling high-resolution multi-touch sensing on rear-projected interactive surfaces. In more detail, they go over previous applications, provide implementation details, discuss results from their initial prototype, and outline future directions.

Previous Applications

This technique has been used since at least the 1960s in applications such as fingerprint imaging, in a painting application in the 1970s, in robotics, and elsewhere; the same principle underlies fiber optics. The technique is widely known and has been around for a long time, which is a pro: the technology is familiar, already studied and implemented, and this brings down the cost of using it.

Implementation Details

Simple imaging techniques such as rectification, background subtraction, noise removal, and connected component analysis are applied to each frame of data. Since these algorithms are widely known and used, this is a pro when it comes to implementing the technology. Since the technique is camera based, varying backgrounds and background lighting will be a drawback; this is a con compared to a capacitance-based system, which is not affected by such background noise.
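
For reference, a per-frame pipeline along these lines can be sketched with OpenCV. This is not the authors' code; the thresholds, blob-area cutoff, and the identity homography in the demo are assumptions, but the steps (rectification, background subtraction, noise removal, connected-component analysis) are the ones the paper names.

  import numpy as np
  import cv2  # OpenCV

  def detect_touches(frame, background, homography, threshold=30, min_area=20):
      h, w = frame.shape
      # Rectification: warp the camera image into screen coordinates.
      rectified = cv2.warpPerspective(frame, homography, (w, h))
      bg = cv2.warpPerspective(background, homography, (w, h))
      # Background subtraction: touches show up as bright spots over the model.
      diff = cv2.subtract(rectified, bg)
      # Noise removal: smooth, threshold, and clean up with morphology.
      diff = cv2.GaussianBlur(diff, (5, 5), 0)
      _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
      mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
      # Connected-component analysis: each sufficiently large blob is a touch.
      n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
      return [tuple(centroids[i]) for i in range(1, n)
              if stats[i, cv2.CC_STAT_AREA] >= min_area]

  # Synthetic demo: a dark background with one bright "fingertip" blob.
  bg = np.zeros((240, 320), np.uint8)
  frame = bg.copy()
  cv2.circle(frame, (160, 120), 6, 255, -1)
  print(detect_touches(frame, bg, np.eye(3)))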

Results from Initial Prototype

Their prototype used a ¼" waveguide, which introduced some disparity; they said ease of implementation was the only reason for that choice of thickness. The surface became contaminated easily with use, which caused problems, although the adaptive background algorithm learns this noise and stops treating it as a touch. The system also depends on the optical qualities of the object being sensed, so it cannot detect non-skin objects such as a mug or a gloved hand, and a user with dry skin has to press harder on the screen to get the same results as a user without dry skin. The surface has to be cleaned periodically to maintain accuracy. They also note that some of these issues could be remedied by engineering a compliant surface overlay.
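
A sketch of the kind of adaptive background model they rely on is below. It is my own illustration under assumed parameters (the 0.02 learning rate and the 25-level threshold are not from the paper): the background estimate drifts toward anything that stays on the surface, so a persistent smudge eventually stops registering, while a short-lived fingertip blob still stands out.

  import numpy as np

  class AdaptiveBackground:
      def __init__(self, first_frame, alpha=0.02):
          self.model = first_frame.astype(np.float32)
          self.alpha = alpha  # how quickly residue is absorbed per frame

      def update(self, frame):
          frame = frame.astype(np.float32)
          # Exponential moving average: persistent residue fades into the model.
          self.model = (1 - self.alpha) * self.model + self.alpha * frame
          # Anything still much brighter than the model is a candidate touch.
          return (frame - self.model) > 25

  smudge = np.full((4, 4), 40, np.uint8)          # persistent contamination
  bg = AdaptiveBackground(np.zeros((4, 4), np.uint8))
  for _ in range(200):
      mask = bg.update(smudge)
  print(mask.any())  # False: the smudge has been learned as background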

Future Directions

They discuss upgrading the system to differentiate between different points of contact. Currently, there is no way to tell whether two fingers came from the same hand.

Discussion
  • Low-level algorithms to detect touch, light, background, etc...
  • How does this technique compare to 3M & MS Surface systems?
  • Adaptive algorithms to target user attributes: dry skin vs. non-dry skin

Multi-Touch Surfaces: A Technical Guide

Overview

This paper is a technical guide to multi-touch surfaces that goes into detail about all the layers of a multi-touch system, covering both the hardware implementation and the software that drives it.

Touch Technologies

They discuss several different touch technologies, including but not limited to:
  • Resistance based touch surfaces
    • Low power consumption
      • Good for mobile devices
    • Low clarity interactive surface
      • Additional screen protection hurts functionality
  • Capacitance based touch surfaces
    • High clarity
      • Good for multitude of devices
    • Very expensive
    • Very durable
      • Good for public usage
    • Accuracy decreases with multiple objects
      • Firmware usually limits number of touches
  • Surface acoustic wave (SAW) touch surfaces
    • Position is structured
  • Frustrated Total Internal Reflection
    • Common algorithms can be used to process input
  • Diffused Illumination (DI)
    • Allows tracking & identification of objects
    • Illuminated surface


BYO Multi-Touch Surface

There are libraries that support a BYO (build your own) multi-touch surface, providing a layer of abstraction that makes it easier to manipulate objects, access camera data, and perform many other useful tasks (a rough sketch of the kind of interface such a library exposes follows the list):
  • libavg: this supports the full DI pipeline and it is the only one that does so.
  • Multi-touch lib T-Labs: a Java library released under GNU.
  • OpenFTIR: this Framework is under construction and processes frames quickly.
  • TouchLib: provides cross-platform video processing and blob tracking for FTIR & DI
  • VVVV: a visual software toolkit for rapid development of prototypes for Windows only
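
None of the code below is the actual API of libavg, TouchLib, or the other libraries above; it is only a rough sketch of the kind of abstraction they provide, where the library handles the video processing and hands the application a stream of tracked blob events.

  from dataclasses import dataclass
  from typing import Callable, List

  @dataclass
  class Blob:
      blob_id: int
      x: float
      y: float
      area: float

  class BlobTracker:
      """Hypothetical abstraction layer sitting between camera and application."""

      def __init__(self):
          self._listeners: List[Callable[[str, Blob], None]] = []

      def add_listener(self, callback: Callable[[str, Blob], None]) -> None:
          self._listeners.append(callback)

      def feed(self, detections: List[Blob]) -> None:
          # A real library would match detections across frames and report
          # down/move/up events; here every detection is reported as "down".
          for blob in detections:
              for cb in self._listeners:
                  cb("down", blob)

  tracker = BlobTracker()
  tracker.add_listener(lambda event, blob: print(event, blob))
  tracker.feed([Blob(1, 160.0, 120.0, 42.0)])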


System Latency

They discuss issues that cause system latency, in particular latency related to the camera, TouchLib, the application, and the digital projector, and wrap up with the total system latency.

The camera experiences latency from the sensor picking up light, which is termed the “integration time”. The “sensor readout time” is the time it takes to transfer data from the sensor to the camera, and on top of that there is the transfer time from the camera to the computer over the FireWire bus. The sensor readout and the integration cannot happen at the same time, so the camera latency is at minimum the sum of those two times. The TouchLib library adds some latency as well, measured at 32 ms in their test. The latency of the projector was measured at 100 ms; different projectors produced nearly the same results, except for the 3M DMS 700, which was the slowest. The total system latency was measured at 225 ms.
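
As a back-of-the-envelope check, the pieces add up roughly like this. Only the TouchLib (32 ms), projector (~100 ms), and total (~225 ms) figures come from the guide; the split between the camera path and the application below is my own assumption chosen to make the sum match.

  # Rough latency budget for the pipeline described above (all values in ms).
  latency_ms = {
      "camera (integration + readout + FireWire transfer)": 60,   # assumed
      "TouchLib processing": 32,                                  # from the guide
      "application": 33,                                          # assumed
      "digital projector": 100,                                   # from the guide
  }
  total = sum(latency_ms.values())
  for stage, ms in latency_ms.items():
      print(f"{stage}: {ms} ms")
  print(f"total (guide reports ~225 ms): {total} ms")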

Discussion
  • The different software libraries available.
  • The different hardware available.
  • Cost comparison of different systems.

Iterative User Interface Design

Overview

This paper is about iterative user interface design in general, and it shows the successes of using this process. The overall improvement in usability from the first to the last iteration was shown to be 165%, and the median improvement per iteration was 38%. They found that there should be at least three iterations, though additional iterations could actually decrease usability if the usability engineering process were focused on improving other parameters.
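
As a rough sanity check of my own (not a calculation from the paper), compounding the median per-iteration gain over three iterations lands in the same ballpark as the overall figure:

  # Compounding the median 38% per-iteration improvement over 3 iterations.
  median_gain = 0.38
  iterations = 3
  overall = (1 + median_gain) ** iterations - 1
  print(f"compounded improvement: {overall:.0%}")  # ~163%, close to the 165% reported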

The Benefits of Iteration

Nielsen presents a graph showing that usability increases after each iteration until it eventually hits a plateau. A con is that usability sometimes decreases after an iteration, since new usability problems may be introduced; the pro is that these can usually be ironed out shortly afterward. He also notes that interface reconceptualizations have not been studied on projects completed by an individual.

An example from later in the paper (Table 5) shows a dramatic worsening in Time on Task and Subjective Satisfaction, and an increase in Errors Made and Help Requests, from version 2 to 3 and from 3 to 4. However, all of these measures had improved by the final version (version 5). A big con arises if a project cannot be updated again before release: it could be better to ship version 1 than version 3.

From my perspective, iterative design is natural, and projects will follow it without the designers even trying to. Building several different designs, comparing them side by side, and then choosing the best will also work, but the chosen version would still be improved through an iterative process. Choosing between multiple designs and then iteratively improving the chosen one, borrowing features from the other versions, is the best method for design.

Conclusions

The number of iterations cannot be chosen in advance, since the version from iteration 3 might be worse than the first but, with two more iterations, could be much better than it. It would be a big con to try to choose the exact number of iterations beforehand and a large pro to let the number of iterations stay dynamic.

Discussion
  • Iterative usability design for individuals instead of teams
  • Alternative to iterative design: iteration is natural
  • Nielsen writes about iPad & touches upon multi-touch in the paper:
    • http://www.useit.com/alertbox/ipad.html

A Procedure for Developing Intuitive and Ergonomic Gesture Interfaces for Man-Machine Interaction

Introduction

This paper is about humans using gestures to interact with machines. They go over many issues such as culture, the different types of gestures, creating a universal vocabulary, ergonomics & biomechanics, and approaches to finding gestures, and finally they present an experiment.

Culture

A big hurdle in building gesture-based systems is that gestures are generally culture dependent. A keyboard, although dependent on a language, can be translated fairly easily for use by another culture; it is not so easy with gestures. A universal gesture language could be created, but there would be a huge learning curve for most, if not all, cultures. One example cited is that a ring made with the thumb and index finger means “OK” in America and “money” in Japan.

Discussion
  • Creating a non-cultural-biased universal gesture language
  • Practical uses for gesture based man-machine interaction
  • Standard learning for basic computer classes

Experiences with and Observations of Direct-Touch Tabletops

Overview

This paper goes over user experiences with direct-touch tabletops. They go into detail about aspects such as simultaneous touching, ambiguous input, one-fingered touch, finger resolution, alternate touch input, crowding and clutter, text input, orientation, multi-user coordination, occlusion, ergonomic issues, and mental models. The system used for their experiences and observations is DiamondTouch, a direct multi-touch, multi-user tabletop that can also identify which user is touching which particular location on the surface.

Observations

An observation they made about simultaneous touching was that users hesitated to use the tabletop at the same time as others. This is most likely because users are unfamiliar with the shared workspace, but the pro is that users will accept it over time.

Another observation they made was about ambiguous touches: users simply touched the table accidentally with their wrists and so on.

Users started off using only one finger to interact with the device. However, after some video learning and use of the system, they began to interact with multiple fingers and hands.

There is some friction around finger resolution. Users have different-sized fingers, touching with a finger is not as precise as pointing with a mouse, and standard windows and widgets are targeted at a mouse pointer rather than a finger. A designer has to keep this in mind when creating a UI for a multi-touch system: rather than reusing the standard UI elements found on a traditional computer, the elements must be designed differently when the user interacts by touch instead of with a mouse.
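
One concrete way this plays out is in target sizing: mouse-oriented widgets are far smaller than a fingertip, so touch targets are usually specified in physical units and converted to pixels per display. The 9 mm minimum below is a commonly cited rule of thumb, not a figure from this paper.

  MM_PER_INCH = 25.4

  def min_touch_target_px(target_mm: float, dpi: float) -> int:
      """Convert a physical touch-target size to pixels for a given display."""
      return round(target_mm / MM_PER_INCH * dpi)

  for dpi in (96, 160, 326):
      print(dpi, "dpi ->", min_touch_target_px(9.0, dpi), "px")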

Some users preferred to use a stylus or some other abstracted input device. The pro of directly interacting with the system is that there is no extra layer of abstraction; using a stylus brings that layer of abstraction back.

Users preferred to have their own “space” when using the system. For multi-user systems, there must be enough room for users to feel comfortable, while still being close enough that they can “work” together simultaneously. Users shouldn't be so close that they accidentally touch an object another user is interacting with.

Users had problems when trying to enter text into the machine. An external “real” keyboard should be available for them.

Few problems were found with orientation for small chunks of text, but users had trouble with large chunks. They were hesitant to use the mechanism for rotating the text. The paper proposes several solutions to this issue, each of which might be best for a specific application.

Users had coordination problems with each other, whether accidental or intentional. Mechanisms for coordination are cited in the paper.

They noticed that occlusion, or shadows cast by hands on top-projection devices, caused almost no problems for users.

The physical design of the system, such as its width and height, caused some problems because users would accidentally trigger unintended input. They showed that a low (coffee-table) surface is better for casual tasks while a higher (desk) surface is better for productivity tasks.

Finally, they found that the users did not view the interactive table top as a computer. This was a pro since users found it more pleasant to use than a traditional desktop computer.

Discussion
  • DiamondTouch’s ability to identify which user is touching a particular location on the surface.
  • Some problems are just because users are unfamiliar with touch surfaces and it will become more “natural” once it is more commonly used.
  • Touch as a standard input that should be taught to children in basic computer classes, like the mouse, keyboard, stylus, etc...

Design and Validation of Two-Handed Multi-Touch Tabletop Controllers for Robot Teleoperation

Overview

This paper describes a virtual controller for a multi-touch surface. The authors compare this controller to a physical controller in an experiment conducted in a mock disaster area where operators had to locate victims. Finally, the results are explored and presented.

DREAM Controller

The DREAM controller is a two-handed controller that is similar to a video game controller but laid out flat on the screen. The controller automatically appears under, and follows, a hand placed on the surface. Its biggest drawback versus a physical controller is the lack of immediate tactile feedback: a user has to memorize the layout and learn that a button was pressed without physically feeling it, as they would with a physical controller. If a user pushes the ‘A’ button on a physical controller with their eyes closed, it is evident; with a virtual controller on a surface it is not, and the user must trust that the system received the command.
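
The paper describes the controller appearing under a hand and tracking it; the sketch below is not the actual DREAM algorithm, just an illustration of the general idea of anchoring a virtual controller at the centroid of a hand's worth of contacts (the five-contact rule is an assumption).

  from statistics import mean
  from typing import List, Optional, Tuple

  Point = Tuple[float, float]

  def controller_anchor(contacts: List[Point]) -> Optional[Point]:
      """Return where to draw the controller, or None if no full hand is down."""
      if len(contacts) < 5:              # assume five contacts signal a hand
          return None
      cx = mean(x for x, _ in contacts)  # controller follows the hand's centroid
      cy = mean(y for _, y in contacts)
      return (cx, cy)

  hand = [(100, 200), (120, 185), (140, 180), (160, 185), (180, 200)]
  print(controller_anchor(hand))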

Experiment

There is no mention of making the “training” for the joystick and the DREAM controller equivalent. It may be that the experimenters favored “training” the subjects on the DREAM controller over the physical controller. Since the experimenters are from UMass and the DREAM controller was designed there, they may have unknowingly put more emphasis on training people to use the DREAM controller than the physical controller. A safeguard should have been in place (it may have been, but it is not mentioned in the paper) to put an equal amount of training and enthusiasm into each of the two methods. With the results being fairly weak statistically and this left unknown, I am a bit skeptical of the experiment and its results.

Results

I was a bit skeptical of the experiment. There was only weak statistical significance in the user results (travelling further, finding more victims) supporting the advantages of the virtual controller.

Another interesting result was that they were able to travel so far that they lapped around and found the same victim more than once.

Discussion
  • Bias in training operators to use the physical controller versus the DREAM controller
    • The operators may have received training that unknowingly favored the DREAM controller over the physical controller.
  • The statistical significance was very small: this may show that there is not much of a difference between the two.
  • It would be interesting to see an experiment comparing the DREAM controller to an actual physical controller similar to a PlayStation or Xbox controller.