Including multi-stroke gesture-based interaction in user interfaces using a model-driven method


Abstract:

Technological advances in touch-based devices now allow users to interact with information systems in new ways, with gesture-based interaction emerging as a popular option. Many daily tasks can be performed on mobile devices and desktop computers by means of multi-stroke gestures. Scaling this type of interaction up to larger information systems and software tools is difficult: gesture definitions are platform-specific, and the interaction is often hard-coded in the source code, which hinders its analysis, validation and reuse. To address this problem, we propose gestUI, a model-driven approach to developing multi-stroke gesture-based user interfaces. The approach allows gestures to be modelled, gesture catalogues to be generated automatically for different gesture-recognition platforms, and the gestures to be user-tested. A model transformation automatically generates the user interface components that support this type of interaction in desktop applications (further transformations are under development). We applied the proposal to two cases: a form-based information system and a CASE tool. We include details of the underlying software technology in order to pave the way for other research endeavours in this area.
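
The abstract does not spell out how multi-stroke gestures are matched against a catalogue, so the following is a minimal, hypothetical sketch in the spirit of point-cloud recognizers such as $P; it is not the gestUI implementation, and all data structures and function names are illustrative assumptions. A gesture is represented as a list of strokes, each stroke a list of (x, y) points, and a candidate drawing is matched against a catalogue of named templates by greedy nearest-neighbour distance.

import math

# Illustrative multi-stroke gesture matching in the spirit of point-cloud
# recognizers (e.g. $P). This is NOT the gestUI implementation: the data
# structures and function names below are assumptions made for illustration.
# A gesture is a list of strokes; each stroke is a list of (x, y) points.

def flatten(gesture):
    # Merge all strokes into one point cloud so stroke order and direction
    # do not affect recognition.
    return [p for stroke in gesture for p in stroke]

def resample(points, n=32):
    # Resample the cloud to n points spaced evenly along the drawing path.
    pts = list(points)
    path_len = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    step = path_len / (n - 1) if path_len > 0 else 1.0
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:            # guard against rounding shortfalls
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    # Scale to a unit bounding box and translate the centroid to the origin.
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    scaled = [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]
    cx = sum(x for x, _ in scaled) / len(scaled)
    cy = sum(y for _, y in scaled) / len(scaled)
    return [(x - cx, y - cy) for x, y in scaled]

def cloud_distance(a, b):
    # Greedy nearest-neighbour matching between two equally sized clouds.
    unmatched, total = list(b), 0.0
    for p in a:
        nearest = min(unmatched, key=lambda q: math.dist(p, q))
        total += math.dist(p, nearest)
        unmatched.remove(nearest)
    return total

def recognize(candidate, catalogue):
    # Return the name of the catalogue template closest to the candidate.
    norm = normalize(resample(flatten(candidate)))
    return min(catalogue,
               key=lambda name: cloud_distance(
                   norm, normalize(resample(flatten(catalogue[name])))))

A catalogue here is simply a mapping from gesture names (for example "delete" or "undo") to recorded template gestures, so recognize(user_drawing, catalogue) returns the name of the best-matching template; in gestUI, such catalogues are generated automatically from the gesture model for each target recognition platform.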

Year of publication:

2015

Keywords:

  • Model-driven engineering
  • Gesture-based interaction
  • User interface
  • Customised gesture

Source:

rraae
google
scopus

Document type:

Conference Object

Status:

Restricted access

Knowledge areas:

  • Software engineering
  • User interface
  • Computer science

Subject areas:

  • Special computer methods
  • Computer programming, programs, data, security
  • Library and archive operations