
ITS 2010: Displays, November 7-10, 2010, Saarbrücken, Germany

BendDesk: Dragging Across the Curve

Malte Weiss Simon Voelker Christine Sutter Jan Borchers

RWTH Aachen University 52056 Aachen, Germany {weiss, voelker, borchers}@cs.rwth-aachen.de christine.sutter@psych.rwth-aachen.de

ABSTRACT

We present BendDesk, a hybrid interactive desk system that combines a horizontal and a vertical interactive surface via a curve. The system provides seamless touch input across its entire area. We explain scalable algorithms that provide graphical output and multi-touch input on a curved surface. In three tasks we investigate the performance of dragging gestures across the curve, as well as virtual aiming at targets. Our main findings are: 1) Dragging across a curve is significantly slower than on flat surfaces. 2) The smaller the entrance angle when dragging across the curve, the longer the average trajectory and the higher the variance of trajectories across users. 3) The curved shape of the system impairs virtual aiming at targets.

ACM Classification: H.5.2 [Information Interfaces and Presentation]: User Interfaces - Input Devices and Strategies.

General terms: Design, Human Factors

Keywords: Curved surface, desk environment, multi-touch, dragging, virtual aiming.

INTRODUCTION

A typical computer workplace integrates horizontal and vertical surfaces into a single workspace. It encompasses one or more vertical displays that show digital content and a larger horizontal area containing input devices, such as mouse and keyboard, paper-based documents, and everyday objects. Touch recognition technologies have combined the benefits of traditional input metaphors with digital documents [24]. Tablets allow high-precision stylus input for graphic design; digital pens, such as Anoto1, enable annotations on physical paper; and multi-touch gestures [4] provide an intuitive way to transform and modify digital data. However, despite all these advantages, such interfaces have barely found their way into everyday workspaces yet.

1www.anoto.com


Figure 1: BendDesk seamlessly merges a horizontal and a vertical interactive surface with a curve.

Many systems have been proposed that use vertical and horizontal interactive surfaces within a single desk environment (e.g., [7, 16]). They provide a large interactive area and allow users to move digital objects across multiple displays. However, those systems suffer from a lack of spatial continuity. According to the Gestalt Law of Closure [5], gaps between adjacent displays suggest isolated interactive areas. Other laws that are useful in screen design may also be violated, e.g., the Law of Proximity, because objects belonging together may be separated across the gap. Furthermore, splitting objects across bezels impairs search accuracy and tunnel steering performance [2]. Finally, those setups limit the applicability of direct manipulation, as movement trajectories are interrupted when dragging a finger or pen from screen to screen.

In this paper, we present BendDesk, a desk environment that merges a vertical and a horizontal multi-touch surface into one interactive surface using a curve (Figure 1). Our system provides a large interactive area within the user’s reach and allows uninterrupted, seamless dragging gestures across the entire surface. The focus of this paper is to explore the effects of a curve between two orthogonal surfaces on one of the most basic gestures: dragging. Our results can inform the design of more complex gestures, as most of these can be subdivided into elementary dragging and pointing operations.


Figure 2: Hardware setup and screen calibration. (a) Placement of projectors and cameras. (b) Interactive areas of the BendDesk system: board, curve, and tabletop. (c) Manual screen calibration.

RELATED WORK

Our project was inspired by the Sun "Starfire" video prototype from 1994, which set out to envision a potential future workplace of 2004 [23]. The envisioned system featured a large interactive area, different input modalities such as gestures and direct manipulation, and applications such as remote collaboration.

In recent years, the specific characteristics of horizontal and vertical interactive surfaces have received great interest in the research community. According to Morris et al. [18], horizontal surfaces are more appropriate for annotation and pen-based note-taking, while vertical displays support reading and intensive writing tasks using keyboards. Since no single display seems appropriate for all potential tasks, Morris et al. propose a hybrid system. In a later paper [17], they report on a field study involving multiple horizontal and vertical screens. Although participants were enthusiastic about the extra space, one reported problem was that the horizontal and vertical screens were perceived as isolated areas. Some studies [17, 19] indicate that interactive surfaces should allow tilting, as the FLUX table [15] does, to increase comfort. However, Morris et al. also emphasize that desk environments should fit into the ecologies of objects. For example, a table should allow users to put down everyday objects. This coincides with observations in a long-term study by Wigdor et al. [26]. The authors point out the "dual use" of interactive tabletops as computing devices and as pieces of furniture. In their study, the participant tended to tilt the table to an angle that kept objects from falling off.

The combination of horizontal and vertical interactive surfaces has mostly been explored in two application areas: collaborative workspaces and remote desks. While tabletops are suitable for face-to-face group work and provide awareness of each other's actions, interactive boards can provide an overview of information shared among groups. Accordingly, many systems have been developed that integrate vertical and horizontal interactive surfaces into collaborative workspaces in order to add digital capabilities [8, 13, 21, 25]. The incorporation of both surface types has also been applied to remote desk environments. For example, the Agora system [16] and DigiTable [7] provide an interactive horizontal surface for a private document space and a vertical surface displaying a remote person via a video conferencing system. However, the vertical surface is non-interactive in most of these systems.

Nearly all multi-touch systems are limited to one or more flat interactive devices. One exception is Sphere [1], a spherical multi-touch enabled display. Furthermore, the field of organic interfaces [6, 1] proposes interactive non-planar surfaces that can be freely deformed. Early examples of this vision are Paper Windows [12] and Gummi [2]. Recently, Curve [28] presented ergonomics and design considerations for building a curved multi-touch table.

We envisioned BendDesk as a multi-touch desk environment that supports interaction with digital documents but also respects the nature of traditional desks. Although there is evidence that tilted surfaces yield high acceptance for specific tasks (see above), we intentionally avoided them for two reasons: Firstly, we consider support for the ecology of (everyday) objects crucial. With the exception of special-purpose desks, such as drawing tables, office desks are usually horizontal because people put physical objects on them. In contrast, the possibilities for placing objects onto a tilted surface, even at small angles, are limited. Secondly, tilting the vertical surface backwards would reduce its reachability at the top.

We also accounted for ergonomic requirements: the user should be able to sit in a comfortable position and to reach the entire input area without much effort. We applied ISO norm 9241-5 to choose the height of the table. Furthermore, we conducted preliminary user tests on an adjustable table prototype to determine the depth of the vertical surface. In these tests, users performed pointing and dragging tasks while the depth of the vertical surface was varied.

As illustrated in Figure 2, our interactive desk consists of one 104 cm × 104 cm acrylic surface that is bent to yield two orthogonal surfaces, seamlessly merged by a curve. The surface is mounted at a height² of 72 cm on a half-closed wooden box that contains all electronics, such as the projectors for graphical output and the cameras for touch input. The form factor of our setup separates the device into three interactive areas: the vertical board (100 cm × 43 cm), the curve (100 cm × 16 cm) with a radius of 10 cm, and the horizontal tabletop (100 cm × 40 cm).

² following ISO 9241-5

We chose a radius of 10 cm to provide a large planar interactive surface while allowing comfortable dragging through the curve. Furthermore, we added a raised non-interactive strip in front of the board that fixes the acrylic in place. As a side effect, this provides an area for the user to rest her hands.

Two short-throw projectors behind the surface show the graphical user interface (GUI) on a Dura-Lar diffusor, each operating with a resolution of 1024 × 768 pixels. An Optoma EX525ST projector displays the GUI on the board, while a NEC WT615 projector shows the interface on the curve and the tabletop. Since the latter employs aspheric mirrors to project the graphics in a flat frustum, the user can sit close to the table without occluding the projection.

We use Frustrated Total Internal Reflection (FTIR) [10] to detect touches on the surface. The acrylic is surrounded by a closed strip of 312 LEDs with a spacing of 1.2 cm that feed infrared (IR) light into the surface. Furthermore, we apply a thin silicone compliant layer between the acrylic and the diffusor. Three Point Grey FireFly MV cameras with attached IR filters track touches on the surface, each running at 60 fps and a resolution of 640 × 480 pixels.

VISUAL OUTPUT

Our framework provides a square 1024 × 1024 pixel GUI that maps isomorphically to the interactive area. The output resolution is approximately 26 dots per inch (DPI); it can be increased by using projectors with a higher resolution or more projectors at shorter distances. The bottom left pixel (0,0) corresponds to the front left corner of the tabletop, and the pixel (1023,1023) maps to the top right corner of the board. We define the upward direction on the table as a vector with a positive y-coordinate in GUI coordinates; the downward direction is defined analogously.
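To make the coordinate conventions concrete, the following sketch (in Python; our illustration, not part of the original framework) maps a GUI y-coordinate to a side-view position on the bent surface. It assumes GUI pixels are distributed uniformly along the arc length and uses the dimensions given above (40 cm tabletop, 16 cm curve of radius 10 cm, 43 cm board; the 16 cm curve is approximately a quarter circle, since pi · 10 / 2 ≈ 15.7 cm):

import math

TABLETOP, CURVE, BOARD = 40.0, 16.0, 43.0  # depths along the surface (cm)
RADIUS = 10.0                              # curve radius (cm)
TOTAL = TABLETOP + CURVE + BOARD           # 99 cm of arc length
GUI_PIXELS = 1024                          # GUI resolution along y

def gui_y_to_side_view(y_px):
    """Map GUI y (0 = front edge of the tabletop) to (depth, height) in cm."""
    s = y_px / (GUI_PIXELS - 1) * TOTAL    # arc length from the front edge
    if s <= TABLETOP:                      # flat tabletop
        return (s, 0.0)
    if s <= TABLETOP + CURVE:              # curved segment (quarter circle)
        phi = min((s - TABLETOP) / RADIUS, math.pi / 2)  # angle in radians
        return (TABLETOP + RADIUS * math.sin(phi),
                RADIUS * (1.0 - math.cos(phi)))
    return (TABLETOP + RADIUS, RADIUS + (s - TABLETOP - CURVE))  # board

With roughly 99 cm of arc length for 1024 pixels, this also reproduces the quoted output resolution: 1024 / (99 / 2.54) ≈ 26 DPI.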

Since our system involves a non-planar surface, we must compensate for substantial distortions when projecting the user interface. Hence, we render the entire GUI into an offscreen buffer first. Subsequently, each projector displays a part of this buffer on a bicubic spline patch that compensates for the respective distortion. Special care is required to position each projector such that its projected target area lies completely within its depth of field.

Projector calibration

We employ a manual calibration process to compute the spline patches for each projector. A paper calibration sheet with an imprinted uniform grid of 32 × 32 dots is placed onto the interactive area. Accordingly, each printed dot with index (x,y) ∈ {0,1,...,31}² on the sheet maps to a pixel position P(x,y) in the GUI space:

P(x,y) = (33x, 33y),

since the grid spans the entire 1024 × 1024 GUI space (31 · 33 = 1023).

The result of a successful calibration process is a projected dot pattern that exactly matches the nodes on the paper grid. That is,

D_i(x,y) = P(x,y) for all (x,y) with frustum_i(x,y) = 1,

where D_i(x,y) is the mapping of projected grid dots to GUI coordinates for each projector i ∈ {1,2}, defined analogously to P(x,y). The function frustum_i(x,y) indicates whether the paper grid point (x,y) is inside the frustum of projector i or not:

frustum_i(x,y) = 1 if the paper grid point (x,y) lies inside the frustum of projector i,
                 0 otherwise.

Each projector is calibrated separately. When starting the calibration for projector i, it displays a 32 × 32 uniform grid that covers the entire screen space of the projector. Hence, each projected grid point is initially shown at a screen coordinate S_i(x,y) distributed uniformly over the projector's 1024 × 768 screen space:

S_i(x,y) = (x · 1023/31, y · 767/31).

Thereafter, the user deselects all grid rows and columns that do not map to rows on the calibration sheet (this defines frustum_i(x,y)). In our case, this means that she deselects the bottom 18 rows for the top projector and the top 16 rows for the bottom projector. Then the user moves the projected grid dots until they match the corresponding points on the paper sheet (D_i(x,y) = P(x,y)). We implemented a set of transform tools to speed up this manual process.

Finally, when the user confirms the calibration, a sub-grid is extracted that contains all grid dots inside the frustum of the projector (frustum_i(x,y) = 1). The corresponding screen coordinates S_i(x,y) then represent the interpolation points of the bicubic spline patch, whereas the values D_i(x,y) are used as texture coordinates to render this part of the interface on the table. This technique easily scales up to setups with more than two projectors, and the process has to be performed only once per configuration. Figure 2(c) illustrates the manual screen calibration.
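As a minimal sketch of this idea (our illustration with fabricated numbers; the paper does not prescribe a library), the bicubic patch can be represented by SciPy's RectBivariateSpline, one spline per screen coordinate, interpolating the user-aligned dot positions over the grid indices:

import numpy as np
from scipy.interpolate import RectBivariateSpline

GRID = 32
idx = np.arange(GRID)

def P(x, y):
    """GUI position of calibration dot (x, y), as defined above."""
    return (33.0 * x, 33.0 * y)

# Stand-in calibration result for one projector: screen coordinates
# (S_u, S_v) of each projected dot after the user aligned it with paper
# dot (x, y). Here we fabricate a plausible smooth warp for testing.
xs, ys = np.meshgrid(idx, idx, indexing="ij")
S_u = 32.0 * xs + 2.0 * np.sin(ys / 5.0)
S_v = 24.0 * ys + 1.5 * np.cos(xs / 7.0)

# Bicubic patches (kx = ky = 3): grid index -> screen coordinate.
spline_u = RectBivariateSpline(idx, idx, S_u, kx=3, ky=3)
spline_v = RectBivariateSpline(idx, idx, S_v, kx=3, ky=3)

# Rendering samples the patch densely; each sample carries its screen
# position from the splines and its texture coordinate P(x, y).
x, y = 10.5, 20.25                         # a point between grid nodes
screen = (spline_u(x, y)[0, 0], spline_v(x, y)[0, 0])
texcoord = P(x, y)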

Rendering pipeline

When launching a BendDesk application, our software framework first creates a 1024 × 1024 GUI texture. It then reads the spline patch for each projector and extracts a high-resolution quad patch with texture coordinates that map into the GUI texture space. As the geometry is static, it can be rendered efficiently, e.g., by using vertex buffers. In each frame, the GUI is rendered into a texture first and then distributed to the projectors, which output the texture on the respective spline patches using the coordinates from the calibration process. We hide this pipeline in the background, i.e., the application designer addresses the GUI coordinate space without having to pay attention to calibration issues or projector setups.
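The two passes can be summarized in the following sketch (hypothetical helper names, since the paper does not publish its API):

# Sketch of the two-pass rendering pipeline described above. The
# renderer, gui, and projector objects are hypothetical stand-ins.
def setup(renderer, projectors):
    gui_texture = renderer.create_texture(1024, 1024)
    meshes = []
    for proj in projectors:
        # Static geometry: a dense quad patch sampled from the
        # calibration spline, with texture coordinates in GUI space.
        patch = proj.spline_patch.tessellate(resolution=64)
        meshes.append(renderer.upload_vertex_buffer(patch))
    return gui_texture, meshes

def render_frame(renderer, gui, gui_texture, projectors, meshes):
    renderer.render_to_texture(gui_texture, gui.draw)  # pass 1: draw GUI
    for proj, mesh in zip(projectors, meshes):         # pass 2: warp output
        proj.draw_textured_mesh(mesh, gui_texture)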

TOUCH INPUT

Our camera setup detects touches on the entire interactive surface, with each camera covering a specific area and sensing FTIR spots independently. We employ a simple detection algorithm based on a connected component analysis after background subtraction. After detecting the spots for all cameras, their coordinates are transformed from camera to GUI coordinates. Similar to the screen calibration, we use a bicubic spline patch for this mapping, as described below. Finally, the transformed spots are sent to the application as touch events in GUI coordinates. Note that all camera fields of view overlap to ensure continuous tracking between the areas.


Figure 3: Extraction of spline patch to map from GUI to camera space. Left: Largest rectangle containing visible dots is extracted. Right: Corresponding area on table.

If multiple spots are mapped to nearly the same GUI position, they are merged into a single touch event by averaging their coordinates.
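A minimal sketch of this detection and merging stage (our illustration using OpenCV; the paper does not name its image-processing library, and the threshold and merge radius below are assumptions):

import cv2
import numpy as np

def detect_spots(frame, background, threshold=40, min_area=4):
    """Background subtraction followed by connected component analysis.
    frame and background are grayscale uint8 camera images."""
    diff = cv2.subtract(frame, background)
    _, binary = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Label 0 is the background; keep centroids of sufficiently large spots.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def merge_spots(gui_spots, radius=8.0):
    """Merge spots (already transformed to GUI coordinates) that fall
    within `radius` pixels of each other by averaging their positions."""
    clusters = []                    # each cluster: [sum_x, sum_y, count]
    for x, y in gui_spots:
        for c in clusters:
            if np.hypot(x - c[0] / c[2], y - c[1] / c[2]) < radius:
                c[0] += x; c[1] += y; c[2] += 1
                break
        else:
            clusters.append([x, y, 1])
    return [(sx / n, sy / n) for sx, sy, n in clusters]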

A predictive tracking algorithm ensures the registration of touch events between successive frames, even if the user quickly changes the speed or direction of a finger on the surface. That is, for each touch T at position p, we track its velocity and acceleration and extrapolate p to its anticipated position p′ in the subsequent frame. If there is a touch close to p′ in the next frame, we assume that it is a translated version of T. In practice, the use of predictive tracking strongly improves touch registration on our system and reliably prevents users from "losing" dragged or transformed objects.
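The extrapolation step can be sketched as follows (a constant-acceleration predictor with greedy nearest-neighbour matching; the names and the distance threshold are our assumptions, not the paper's):

import numpy as np

class Touch:
    def __init__(self, pos):
        self.pos = np.asarray(pos, float)  # position p (GUI pixels)
        self.vel = np.zeros(2)             # per-frame velocity
        self.acc = np.zeros(2)             # per-frame acceleration

    def predict(self):
        # Anticipated position p' in the next frame.
        return self.pos + self.vel + 0.5 * self.acc

    def update(self, new_pos):
        new_pos = np.asarray(new_pos, float)
        new_vel = new_pos - self.pos
        self.acc = new_vel - self.vel
        self.vel = new_vel
        self.pos = new_pos

def register(touches, spots, max_dist=30.0):
    """Match each existing touch to the nearest spot near its predicted
    position; unmatched spots become new touches. (Touches that find no
    spot would be released; omitted here for brevity.)"""
    remaining = [np.asarray(s, float) for s in spots]
    for t in touches:
        if not remaining:
            break
        p = t.predict()
        dists = [np.linalg.norm(s - p) for s in remaining]
        i = int(np.argmin(dists))
        if dists[i] < max_dist:
            t.update(remaining.pop(i))
    return touches + [Touch(s) for s in remaining]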

Camera calibration

For each camera j ∈ {1,2,3}, our calibration process creates a mapping from camera coordinates to global GUI coordinates. When starting the calibration, our software displays an N × M uniform grid with GUI coordinates G(x,y) that covers the interactive surface. Note that this requires a correct projector calibration.

In the first step, the calibration creates a mapping C_j from GUI grid point indices to camera pixels,

C_j(x,y) ∈ [0,640) × [0,480) for (x,y) ∈ {0,...,N−1} × {0,...,M−1},

where N and M denote the grid resolution. The calibration intends to find the camera pixels that match the GUI grid points, together with a visibility function for each camera:

visible_j(x,y) = 1 if camera j sees the grid point (x,y),
                 0 otherwise.

All cameras are calibrated at the same time. The system successively highlights each grid point. For each highlighted dot (x,y), the user touches the surface at that position and then confirms with a button click on a wireless control. Now, our algorithm stores which cameras detected the resulting FTIR spot, and at which camera pixels, defining visible_j(x,y) and C_j(x,y).

As illustrated in Figure 3, this manual process yields a visibility map, visible_j, for each camera. We extract the largest rectangle that only contains visible spots by solving the Maximum Empty Rectangle problem [20]. Similar to the screen calibration, the extracted point indices together with G(x,y) and C_j(x,y) represent the interpolation points for a bicubic spline patch P that maps from GUI to camera coordinates for camera j.
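For illustration, the largest rectangle containing only visible grid points (equivalently, a rectangle empty of invisible points, cf. the Maximum Empty Rectangle problem [20]) can be found with the classic histogram-stack method. A sketch, assuming visible_j is given as a 0/1 matrix indexed [row][column]:

def largest_visible_rectangle(visible):
    """Return (x0, y0, x1, y1), the largest axis-aligned sub-rectangle
    of the 0/1 visibility map `visible` that contains only 1s."""
    rows, cols = len(visible), len(visible[0])
    heights = [0] * cols            # per-column run length of 1s so far
    best_area, best = 0, None
    for y in range(rows):
        for x in range(cols):
            heights[x] = heights[x] + 1 if visible[y][x] else 0
        stack = []                  # column indices with increasing heights
        for x in range(cols + 1):
            h = heights[x] if x < cols else 0
            while stack and heights[stack[-1]] >= h:
                top = stack.pop()
                left = stack[-1] + 1 if stack else 0
                area = heights[top] * (x - left)
                if area > best_area:
                    best_area = area
                    best = (left, y - heights[top] + 1, x - 1, y)
            stack.append(x)
    return best

The extracted index rectangle then selects the interpolation points (the G(x,y) and C_j(x,y) values) for the camera's spline patch.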

