Thursday, February 23, 2012

Multi-Touch Systems that I Have Known and Loved

Overview

This paper presents Bill Buxton’s answers to some general questions he is frequently asked about multi-touch, a topic he has been involved with for many years. It also traces the history of multi-touch systems dating back to the early 1980s.

Chronology of Systems

Many of the interactive devices listed in this paper were multi-touch systems, yet not the standard flat-screen device most people think of when they hear “multi-touch”. One good example is the electroacoustic music device. It was not well implemented, but a device could be built whose input affords the sounds better than a standard keyboard does.

Physical vs. Virtual

Bill Buxton argues that virtual devices may not be ideal compared to real physical devices. This is definitely a drawback of a flat multi-touch screen. For example, if a user were to play a racing game, a real physical steering wheel, like the one for the Wii, would probably be superior to a virtual steering wheel on a flat screen. Another example is an MP3 player whose playback can be paused, or volume changed, with one hand while the device is still in a pocket; a pure touch screen prevents such a thing. A pure touch-screen MP3 player may cause problems for someone at the gym, compared with a player that has physical controls they can operate with one hand while the device is strapped to their arm.

Discussion
  • Physical vs Virtual
  • Something more than just visual feedback
  • “Everything is best for something, but worst for something else”
  • A screen whose surface can move up and down, rather than staying flat
    • Users could get feedback with their eyes closed.

Response to Ripples

Overview

This paper presents Ripples, a framework that addresses some common usability problems of multi-touch devices by providing a standard visual design for feedback on user interactions. The paper details the problem and the proposed solution, and reports experiments showing that users of multi-touch devices prefer using them with the Ripples effects.

Pros & Cons

A pro of this type of standardized framework is that users get a familiar set of interaction feedback they can become accustomed to. A con is that developers might be limited to this standard set of feedback; also, when a developer tries a new, innovative idea, a typical user who is familiar with Ripples could be confused.

Experiment

The user study they performed showed that people preferred using Ripples to not using it, which indicates the system is more pleasant to use. The discussion of the experiment didn’t go into much detail about the non-Ripples system it was compared against. Some developers may be able to implement a feedback system that is more pleasant to use than Ripples; however, having at least the default Ripples was shown to be better than no feedback, or than the “other” feedback system used for comparison.

Further Development

There was no mention of any sound feedback. A set of sounds triggered by different interactions would be an appropriate addition. Microsoft Windows, Apple’s operating systems, and some smartphones and tablets have a standard set of sounds for interactions with the device, so it seems like a good idea to standardize a set of sounds for multi-touch interactions as well.
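To make the idea concrete, here is a minimal sketch in Python of what a standardized event-to-sound mapping might look like; the event names, file paths, and the player interface are all hypothetical, not from the paper.

```python
# Minimal sketch of a standard mapping from touch events to feedback sounds.
# Event names, file paths, and the player interface are hypothetical.
SOUND_MAP = {
    "touch_down": "sounds/tap.wav",
    "drag_start": "sounds/slide.wav",
    "pinch": "sounds/zoom.wav",
    "touch_rejected": "sounds/error.wav",
}

def play_feedback(event_name, player):
    """Play the standard sound for an interaction event, if one is defined."""
    path = SOUND_MAP.get(event_name)
    if path is not None:
        player.play(path)
```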

Discussion
  • Design Principles for Multi-touch systems
  • Custom system feedback may be better in some cases than using Ripples
  • Sound effect feedback

Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection

Overview

This paper discusses frustrated total internal reflection as a simple, inexpensive, and scalable technique for enabling high-resolution multi-touch sensing on rear-projected interactive surfaces. In more detail, they go over previous applications, provide implementation details, discuss results from their initial prototype, and outline future directions.

Previous Applications

This technique has been used in applications such as fingerprint imaging since at least the 1960s, a painting application in the 1970s, robotics, and others; the same principle also underlies fiber optics. The technique is widely known and has been around for a long time. This is a pro, since the technology is familiar and has already been studied and implemented, which brings down the cost of using it.

Implementation Details

Simple imaging techniques such as rectification, background subtraction, noise removal, and connected component analysis are applied to each frame of data. Since these algorithms are widely known and used, this is a pro when it comes to implementing the technology. However, because the technique is camera-based, it is sensitive to varying backgrounds and ambient lighting, a con compared to a capacitance-based system that is not affected by background noise.
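To make that per-frame pipeline concrete, here is a minimal sketch in Python with OpenCV, assuming grayscale camera frames and a precomputed surface homography; the threshold, kernel size, and minimum blob area are illustrative guesses, not the paper’s parameters.

```python
import cv2
import numpy as np

def process_frame(frame, background, homography):
    """One frame of a simple FTIR touch pipeline: rectify, subtract the
    background, remove noise, then find touch blobs via connected
    components. Thresholds and kernel sizes are illustrative guesses."""
    # Rectify: warp the camera image into the surface's coordinate frame.
    h, w = background.shape
    rectified = cv2.warpPerspective(frame, homography, (w, h))
    # Background subtraction: touches show up as bright spots.
    diff = cv2.subtract(rectified, background)
    # Noise removal: blur, threshold, and morphologically open.
    blurred = cv2.GaussianBlur(diff, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 30, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Connected component analysis: each remaining blob is a candidate touch.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
    touches = [tuple(centroids[i]) for i in range(1, n)
               if stats[i, cv2.CC_STAT_AREA] > 20]  # minimum area, assumed
    return touches
```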

Results from Initial Prototype

Using a ¼-inch waveguide in their prototype introduced some disparity, but they said there was no reason, other than ease of implementation, behind that choice of thickness. Their surface became contaminated easily with use, which caused problems; however, the adaptive background algorithm learns the contamination as noise and does not treat it as a touch. The system also depends on the optical qualities of the object being sensed, so it will not detect objects such as a mug or a gloved hand, and a user with dry skin has to press harder on the screen to achieve the success of a user without dry skin. The surface has to be cleaned periodically to maintain accuracy. They also note that some of these issues could be remedied by engineering a compliant surface overlay.
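The paper’s adaptive background algorithm is not spelled out here, so the following is only a minimal sketch of one plausible approach, an exponential moving average updated while the surface appears idle; the learning rate is an assumption.

```python
import numpy as np

def update_background(background, frame, touches, alpha=0.05):
    """Slowly fold the current frame into the background model when no
    touches are present, so persistent contamination (smudges, scratches)
    is learned as background rather than reported as touches.
    alpha is an illustrative learning rate, not from the paper."""
    if not touches:  # only adapt when the surface appears idle
        background = (1.0 - alpha) * background + alpha * frame.astype(np.float64)
    return background
```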

Future Directions

They discuss upgrading the system to differentiate between points of contact; currently, there is no way to distinguish whether two fingers came from the same hand.

Discussion
  • Low-level algorithms to detect touch, light, background, etc...
  • How does this technique compare to 3M & MS Surface systems?
  • Adaptive algorithms to target user attributes: dry skin vs. non-dry skin

Multi-Touch Surfaces: A Technical Guide

Overview

This paper is a technical guide to multi-touch surfaces that works through all the layers of a multi-touch system, discussing the implementation of both the hardware and the software in detail.

Touch Technologies

They discuss several different touch technologies, including but not limited to:
  • Resistance based touch surfaces
    • Low power consumption
      • Good for mobile devices
    • Low clarity interactive surface
      • Additional screen protection hurts functionality
  • Capacitance based touch surfaces
    • High clarity
      • Good for multitude of devices
    • Very expensive
    • Very durable
      • Good for public usage
    • Accuracy decreases with multiple objects
      • Firmware usually limits number of touches
  • Surface acoustic wave (SAW) touch surfaces
    • Position is structured
  • Frustrated Total Internal Reflection
    • Common algorithms can be used to process input
  • Diffused illumination (DI)
    • Allows tracking & identification of objects
    • Illuminated surface


BYO Multi-Touch Surface

There are libraries that support a BYO (build-your-own) multi-touch surface, providing a layer of abstraction that makes it easier to manipulate objects, access camera data, and perform many other useful tasks:
  • libavg: supports the full DI pipeline, and is the only library that does so.
  • Multi-touch lib from T-Labs: a Java library released under the GNU license.
  • OpenFTIR: a framework under construction that processes frames quickly.
  • TouchLib: provides cross-platform video processing and blob tracking for FTIR & DI.
  • VVVV: a visual software toolkit for rapid prototyping, Windows only.


System Latency

They discuss the issues that cause system latency. In particular, they cover latency related to the camera, TouchLib, the application, and the digital projector, and wrap up with the total system latency.

The camera experiences latency from the sensor picking up light, termed the “integration time”. The “sensor readout time” is the time it takes to transfer data from the sensor to the camera, and there is an additional transfer time from the camera to the computer over the FireWire bus. The sensor readout and integration cannot happen at the same time, so the camera latency is, at minimum, the sum of those two times. The TouchLib library adds some latency, measured at 32 ms in their test. The latency of the projector was measured at 100 ms; different projectors gave nearly the same results, except for the 3M DMS 700, which was the slowest. The total system latency was measured at 225 ms.
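As a back-of-the-envelope check on how the stages add up, here is a minimal sketch; the TouchLib (32 ms), projector (100 ms), and total (225 ms) figures come from the paper as summarized above, while the camera and application figures are placeholders chosen only so the budget sums correctly.

```python
# Back-of-the-envelope system latency budget (milliseconds).
# TouchLib (32), projector (100), and the 225 ms total are the paper's
# measurements; the camera and application figures are placeholders,
# not measured values.
budget_ms = {
    "camera (integration + readout + FireWire)": 60,  # placeholder
    "TouchLib processing": 32,                        # measured
    "application": 33,                                # placeholder
    "digital projector": 100,                         # measured
}

total = sum(budget_ms.values())
print(f"Estimated total: {total} ms (paper's measured total: 225 ms)")
```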

Discussion
  • The different software libraries available.
  • The different hardware available.
  • Cost comparison of different systems.

Experiences with and Observations of Direct-Touch Tabletops

Overview

This paper goes over the user experiences of direct-touch tabletops. They go into details about aspects such as simultaneous touching, ambiguous input, one-fingered touch, finger resolution, alternate touch input, crowding and clutter, text input, orientation, multi-user coordination, occlusion, ergonomic issues, and mental models. The system used for their experiences and observations is DiamondTouch, which is a direct multi-touch, multi-user table top that also offers the utility of identifying which user is touching which particular location on the surface.

Observations

An observation they made about simultaneous touching was that users hesitated to use the tabletop at the same time as others. This is most likely because users are unfamiliar with shared workspaces; the upside is that users will accept it over time.

Another observation was about ambiguous touches: users simply touched the table accidentally with their wrists and forearms.

Users started off using only one finger to interact with the device. However, after some video instruction and use of the system, they began to interact with multiple fingers and both hands.

There is some cognitive friction around finger resolution. Users have different-sized fingers, touching with a finger is not as precise as pointing with a mouse, and standard windows and widgets are targeted at a mouse pointer rather than a finger. A designer has to keep this in mind when creating a UI for a multi-touch system: rather than reusing the standard UI elements of a traditional computer, elements must be designed differently when the user interacts by touch rather than with a mouse.
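As a small illustration of that sizing constraint, here is a sketch that converts a physical touch-target size into pixels for a given display; the ~9 mm minimum is a commonly cited fingertip rule of thumb, not a figure from this paper.

```python
MM_PER_INCH = 25.4

def min_target_px(dpi, min_target_mm=9.0):
    """Convert a physical minimum touch-target size into pixels.
    The ~9 mm default is a commonly cited fingertip rule of thumb,
    not a value from the paper."""
    return round(dpi * min_target_mm / MM_PER_INCH)

# A widget that is fine at 16 px for a mouse pointer may need to grow
# roughly 2-3x to be a comfortable touch target.
print(min_target_px(96))   # ~34 px on a 96 dpi surface
```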

Some users preferred to use a stylus or some other mediating input device. The pro of interacting with the system directly is that there is no extra layer of abstraction; with a stylus, that layer remains.

Users preferred to have their own “space” when using the system. A multi-user system must give each user enough room to feel comfortable while keeping them close enough to “work” together simultaneously; users shouldn’t be so close that they accidentally touch an object another user is interacting with.

Users had problems entering text into the machine; an external physical keyboard should be available.

Few problems were found with orientation for small chunks of text, but users had trouble with large chunks and were hesitant to use the mechanism for rotating text. The paper proposes several solutions to this issue, each of which might be best for a specific application.

Users interfered with one another’s work, both accidentally and intentionally; mechanisms for coordination are cited in the paper.

They noticed that occlusion and hand shadows on top-projected devices caused almost no problems for users.

The physical design of the system, such as its width and height, caused some problems: users would accidentally trigger unintended input. They showed that a low (coffee-table) surface is better for casual tasks, while a higher (desk) surface is better for productivity tasks.

Finally, they found that the users did not view the interactive table top as a computer. This was a pro since users found it more pleasant to use than a traditional desktop computer.

Discussion
  • DiamondTouch’s ability to identify which user is touching a particular location on the surface.
  • Some problems arise only because users are unfamiliar with touch surfaces; interaction will feel more “natural” once touch is more commonly used.
  • Touch as a standard input that should be taught to children in a basic computing course, alongside the mouse, keyboard, stylus, etc.

Design and Validation of Two-Handed Multi-Touch Tabletop Controllers for Robot Teleoperation

Overview

This paper describes a virtual controller for a multi-touch surface. The authors compare this controller to a physical controller in an experiment conducted on a mock-up of a disaster area, where operators had to locate victims. Finally, the results are presented and explored.

DREAM Controller

The DREAM controller is a two-handed controller that is similar to a video game controller but laid out flat on a screen. The controller automatically appears under, and follows, a hand placed on the surface. Its biggest drawback versus a physical controller is the lack of immediate tactile feedback: a user has to memorize the layout and learn that a button was pressed without physically feeling it. If a user pushes the ‘A’ button on a physical controller with their eyes closed, the press is evident; with a virtual controller on a surface it is not, and the user must trust that the system received the command.

Experiment

There is no mention of making the “training” on the joystick and the DREAM controller equal. The experimenters may have favored training the subjects on the DREAM controller over the physical controller: since they are from UMass and the DREAM controller was designed there, they may have unknowingly put more emphasis on training people to use it. A safeguard should have been in place (it may have been, but it is not mentioned in the paper) ensuring an equal amount of training and enthusiasm for each of the two methods. With the results being statistically weak and this question unresolved, I am a bit skeptical of the experiment and its results.

Results

I am a bit skeptical of the experiment. There was only weak statistical significance in the user-response measures (travelling further, finding more victims) favoring the virtual controller.

Another interesting result was that some operators travelled so far that they lapped the course and found the same victim more than once.

Discussion
  • Bias in training operators to use the physical controller versus the DREAM controller
    • The experimenters may have unknowingly favored the DREAM controller in how they trained operators.
  • The statistical significance was very small; this may show that there is not much difference between the two.
  • It would be interesting to see an experiment comparing the DREAM controller to a physical controller similar to a PlayStation or Xbox controller.

Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces

Overview

This paper covers freehand gestural interaction with direct-touch computing surfaces. It presents the design principles of gesture registration, gesture relaxation, and gesture & tool reuse. The authors tested these principles with an annotate gesture, a wipe gesture, a cut/copy & paste gesture, and a pile-n-browse gesture, and conducted a user evaluation.

Design Principles

One drawback, mentioned in the gesture relaxation section, is that the height of a table, be it coffee-table or desk height, may affect the system’s ability to read a gesture. If a user puts down three fingers while standing at coffee-table height, the system may read only the fingertips, while at desk height while seated it may read the fingertips plus much of the rest of the fingers. If those were two different gestures, the system would not be able to differentiate them.
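A minimal sketch of the ambiguity, assuming contacts are reported as blobs with an area: classifying a contact as “fingertip” versus “full finger” by a fixed area threshold, as below, is exactly what table height breaks; the threshold is an illustrative guess, not from the paper.

```python
FINGERTIP_MAX_AREA = 150  # illustrative threshold in sensor pixels, assumed

def classify_contact(blob_area):
    """Classify a contact blob as a fingertip or a full-finger press by area.
    A fixed threshold like this is what posture undermines: the same
    gesture yields small blobs when standing and large ones when seated,
    so area alone cannot distinguish the two gestures."""
    return "fingertip" if blob_area < FINGERTIP_MAX_AREA else "full finger"
```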

In the “gesture and tool reuse” section, they hint at using the same gesture to mean different things in different contexts. This is similar to having modes, which most HCI studies consider undesirable: a user could perform a gesture expecting one result and get a different one. There must be cases where reusing a gesture in a different context clearly yields the expected result, but designers have to be careful.

Another note: a user may be in the middle of one gesture, post gesture registration, and feel the urge to perform another gesture without finishing the first. This could cause confusion among users and yield unpredicted results; some system feedback could keep the user aware that the prior gesture has not yet finished.

It is good that gesture primitives can be reused, allowing users to learn a smaller vocabulary; designers must simply be careful not to use the same primitives for completely different tasks.

User Evaluation

In the study, the users appeared to learn the gestures fairly easily, though some struggled with the “piling” gestures. It would be interesting to see whether users remembered these gestures after not using them for some period of time.

Discussion
  • A common vocabulary of gesture primitives
    • Using them in different contexts should mean similar tasks and yield similar results. For example, dragging one finger should move something in all contexts, not resize in one context and pan the screen in another; to me, that would be confusing. There should be a common vocabulary where, say, four fingers always drag the screen and never move an item on the screen (see the sketch after this list).
  • Memory of gestures
    • It would be interesting to see a study of how users memorize these common gestures and gesture primitives over time. They may become second nature, or users may keep having to refer back to some guide.
  • Primitives as letters, and gestures as words
    • It is interesting to compare the gestures & primitives to a full human language
      • Each primitive would be like a letter or word and gestures would be like a sentence. Users would be able to “speak” to the computers with this language.
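As a toy illustration of such a fixed vocabulary, here is a sketch mapping gesture primitives to a single, context-independent action; the gesture names and actions are hypothetical, not from the paper.

```python
# Toy sketch of a context-independent gesture vocabulary: each primitive
# maps to exactly one action and is never reinterpreted per context.
# Gesture names and actions are hypothetical.
GESTURE_VOCABULARY = {
    ("drag", 1): "move_object",     # one finger always moves an object
    ("drag", 4): "pan_view",        # four fingers always pan the screen
    ("pinch", 2): "resize_object",  # a two-finger pinch always resizes
}

def dispatch(gesture, finger_count):
    """Look up a gesture's single, context-independent meaning."""
    return GESTURE_VOCABULARY.get((gesture, finger_count), "unrecognized")
```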

Multi-touch: Analysis of Natural Gestures for Controlling Robot Teams on Multi-touch Tabletop Surfaces

Overview

This paper is about a study of users using natural gestures for controlling robot teams with multi-touch input. The authors developed 26 tasks for the users to complete and recorded the gestures that they attempted while trying to perform the tasks. Finally, they classified these gestures (selection, position, rotation, viewpoint, user interface elements) and did some analysis on them resulting in some discussions & conclusions.

Related Work

Tasks commonly performed with a pointer tend to be executed with one finger, which makes it difficult to use multiple points for them. Since users have learned to perform these tasks with a single point, it would be very difficult to reteach them to use multiple points. Additionally, users typically control a computer with a single hand and would have to learn to use two; Koskinen showed this in a study.

Results and Discussion

One interesting finding is that users did not prefer to use one finger of one hand. Further, the study showed that when users confront an unfamiliar multi-touch UI, they are more willing to use multiple fingers and multiple hands. When multi-touch becomes more widespread, I think multi-finger, multi-hand use will become more common. This also shows that when users encounter lifelike objects in a virtual environment, they are more likely to use real-world interactions (multi-hand & multi-finger).

Discussion
  • Multi-finger, multi-hand contradiction:
    • It was interesting to see they found a contradiction to the belief that users prefer to use one hand and one finger.
  • Users exploring multi-touch UIs:
    • It was interesting to see the natural reaction to users encountering a multi-touch UI
  • Bias from past learned behavior from mouse pointers:
    • What can be done to unteach users the single-point paradigm? Or should multi-touch UI design be changed, away from a possibly better UI, to adhere to users’ familiarity with single-point interaction?

User-Defined Gestures for Surface Computing

Overview

This paper describes a study of user-defined gestures for surface computing, and discusses developing a vocabulary based on the common gestures users perform without any outside influence. In the study, the effect of an action was shown on the screen and the user tried to perform the gesture that would cause it. The authors then created a taxonomy, analyzed the results in detail, made observations, and discussed the experiment and their findings.

Developing a User-Defined Gesture Set

One idea presented is that users are not designers, so care must be taken when building a gesture set from user-defined gestures. Testing a large number of people yields a consensus on the most popular gesture for a specific action, but not necessarily the best gesture for that action. Another interesting point is that feedback from the computer might change the next gesture, or the next primitive step of a gesture. For example, if a ripple appears when a user puts his or her hand down, the user may take a different course than if it had not appeared. This could have impacted the experiment, but they state that they removed such feedback.

Discussion
  • It would be interesting to see a study where the subjects were unaware that the testers were looking for the most logical gesture to cause an action
    • Users could be given a UI and told to make some gesture to cause an action (such as delete, move, cut, etc.) in a “dummy” app, and that data collected; this way the user would naturally try some gesture to cause the action
  • Integrating past knowledge, learned from single-point devices, will bias the most “natural” form of gestures: it would be interesting to see whether the results are the same when people who are not computer literate are tested.
  • Mixed UIs:
    • Standard computer input along with a multi-touch screen: this would be the most practical setup for typical computer use. Examples include a small box in the corner with common tasks like cut, copy, and paste (another type of shortcut), drawing a question mark on the screen for help, or dragging and dropping an item with fingers when a user finds that more convenient than doing it with a mouse.