
On October 5th, we participated in the Student Innovation Contest at the 2009 User Interface Software and Technology conference (UIST) in Victoria, BC.  Student teams were given about a month to develop a novel use for a pressure sensitive keyboard developed by Microsoft Research, and all the entries were demonstrated and voted upon at the conference.  Our submission is detailed below and in this demo video.

Laban Gestures for Expressive Keyboarding

Karen Tanenbaum, Josh Tanenbaum & Johnny Rodgers

Simon Fraser University, School of Interactive Arts + Technology

Keyboards are traditionally discrete input devices, registering isolated key presses. Adding pressure sensitivity introduces a continuous dimension to keyboard input, capturing more information about the user’s actions. To take advantage of this, the role of the keyboard must be reimagined. Our goal was to design keyboard interactions that are expressive and emotional.

Theatre and dance use movement frameworks to understand gesture. Simplified versions of these choreographic systems can provide HCI with a basic vocabulary of gestures. Specifically, we adapted a simplified version of Laban Movement Analysis for keyboard gestures. Laban’s framework breaks gestures down into three dichotomies: Sudden vs. Sustained, Light vs. Heavy, and Indirect vs. Direct. Combining these elements produces eight unique gestures.
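To give a concrete sense of how this plays out in code, here is a minimal sketch of classifying a keystroke’s pressure trace into the three dichotomies and naming the resulting gesture. The thresholds and feature choices are illustrative assumptions, not the values from our actual demo:

```python
# A minimal sketch (not our actual demo code) of classifying one keystroke's
# pressure trace into the three Laban dichotomies. Thresholds are assumed.

def classify_gesture(samples, duration_s, keys_involved):
    """samples: pressure readings in [0.0, 1.0] for one gesture."""
    time = "Sudden" if duration_s < 0.3 else "Sustained"   # assumed cutoff
    weight = "Heavy" if max(samples) > 0.6 else "Light"    # assumed cutoff
    space = "Indirect" if keys_involved > 1 else "Direct"  # single key = direct
    return (time, weight, space)

# The eight resulting gestures, named after Laban's "effort actions"
# (we use "shove" and "brush" where Laban says "press" and "glide").
GESTURES = {
    ("Sudden",    "Heavy", "Direct"):   "punch",
    ("Sudden",    "Heavy", "Indirect"): "slash",
    ("Sudden",    "Light", "Direct"):   "dab",
    ("Sudden",    "Light", "Indirect"): "flick",
    ("Sustained", "Heavy", "Direct"):   "shove",
    ("Sustained", "Heavy", "Indirect"): "wring",
    ("Sustained", "Light", "Direct"):   "brush",
    ("Sustained", "Light", "Indirect"): "float",
}
```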

[Figure: UIST 2009 SIC Laban mappings]

To situate our gestures in a mood space, we adopted Russell’s classic “circumplex” model of affect, which arranges emotions along two axes: arousal and pleasure. Beedie et al. provide a basis for separating emotion from mood: emotions are short-term responses, while moods are more diffuse states sustained over time.

[Figure: Russell’s circumplex model of affect]

By combining these two models, we arrived at a framework for expressive gestures that lets us map gestural input to affective responses in a variety of computational environments. We designed two demos in different domains to show the versatility of this approach.
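As a rough illustration of what that mapping might look like, here is a sketch that places each of the eight gestures at an (arousal, pleasure) coordinate on the circumplex and picks the nearest mood label. All of the coordinates and mood labels below are invented for illustration:

```python
# A sketch of situating the eight gestures in Russell's circumplex.
# Coordinates are (arousal, pleasure) in [-1, 1]; all values are
# illustrative assumptions, not our demo's actual mapping.

GESTURE_AFFECT = {
    "punch": ( 0.9, -0.6), "slash": ( 0.8, -0.8),
    "dab":   ( 0.5,  0.5), "flick": ( 0.7,  0.7),
    "shove": (-0.2, -0.5), "wring": (-0.6, -0.8),  # gloomy, depressed
    "brush": (-0.3,  0.5), "float": (-0.7,  0.7),  # calm, content
}

MOODS = {
    "excited": ( 0.8,  0.6), "angry":  ( 0.8, -0.7),
    "content": (-0.5,  0.6), "gloomy": (-0.6, -0.7),
}

def mood_for(gesture):
    """Return the mood label closest to the gesture on the circumplex."""
    a, p = GESTURE_AFFECT[gesture]
    return min(MOODS, key=lambda m: (MOODS[m][0] - a) ** 2
                                    + (MOODS[m][1] - p) ** 2)
```

For example, `mood_for("wring")` returns “gloomy”, matching the sustained, heavy, indirect movement described in the first demo below.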


The first demo, in World of Warcraft, demonstrates how the gestural framework may be used to express emotions through character actions in a game or virtual world. The graphic at the top of this post shows the mapping between the framework and WOW character emotes. Below, you can see the avatar’s response after a “wringing” gesture, which is a sustained, heavy, indirect movement mapped to a gloomy, depressed state.

[Figure: Avatar response to a “wringing” gesture in World of Warcraft]
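In code, that mapping could be wired up as simply as the sketch below. The emote assignments are illustrative stand-ins, and `send_chat` is a hypothetical callback for the game’s chat input; the actual mapping we used is shown in the graphic above:

```python
# A sketch of driving World of Warcraft emotes from recognized gestures.
# The emote assignments are illustrative assumptions, not our exact mapping.

WOW_EMOTES = {
    "punch": "/roar",  "slash": "/growl",
    "dab":   "/smile", "flick": "/cheer",
    "shove": "/sigh",  "wring": "/cry",   # sustained/heavy/indirect -> gloomy
    "brush": "/wave",  "float": "/dance",
}

def on_gesture(gesture, send_chat):
    """Issue the matching emote through the game's chat input."""
    send_chat(WOW_EMOTES[gesture])
```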

Our second demo, a responsive digital painting, demonstrates how the affective gestures may be mapped more opaquely into aesthetic shifts in a piece of new-media art. The graphic at the top of this post shows the mapping between the framework and RGB color space; the speed and direction of the artwork’s rotation were also mapped. Below are two different states of the artwork. On the left, several heavy “punch” and “shove” gestures have been made; on the right, several light “brush” and “dab” gestures have been made.

[Figure: Two states of the responsive painting: heavy gestures (left), light gestures (right)]
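The color and rotation response might look something like the sketch below; which Laban quality drives which RGB channel, and by how much, are assumptions for illustration rather than our demo’s actual values:

```python
# A sketch of nudging the painting's color and rotation per gesture.
# Channel assignments and increments are assumed, not our demo's values.

def update_artwork(state, time, weight, space):
    r, g, b = state["color"]
    if weight == "Heavy":                       # heavy gestures darken/redden
        r, g, b = min(255, r + 20), max(0, g - 10), max(0, b - 10)
    else:                                       # light gestures brighten/cool
        r, g, b = max(0, r - 5), min(255, g + 15), min(255, b + 15)
    step = 0.5 if time == "Sudden" else 0.1     # sudden gestures spin faster
    direction = 1 if space == "Direct" else -1  # indirect reverses rotation
    state["color"] = (r, g, b)
    state["rotation_speed"] += direction * step
    return state
```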

We believe that this framework has applications wherever computation and emotion intersect. It could be implemented in instant messaging clients, or in social networks such as Facebook. It also has potential as a heuristic tool for gauging affect indirectly in user modeling applications.

So that’s the basic idea.  We’re hoping to write a short paper on it sometime soon.  A big thanks to all the people who came by our table and stuck with us as we explained all the crazy ideas we were trying to pull together with this little demo.  We really appreciated the enthusiastic responses we got from everyone!
