Multitouch Exhibit Design 1: Interaction and Feedback

With the development of our first interactive exhibits, and a few rounds of informal user testing, we’ve begun to explore approaches in multitouch and multiuser design. We’ve created both a multitouch mashup that uses Flickr and Yahoo! Maps, and a panoramic viewing application that allows visitors to access detailed photographs from points found on the larger image.

We developed these applications for our multitouch table (MT2) and for the HP TouchSmart platform. After nearly a decade of developing touch-screen exhibits, we find the differences between standard touch and multitouch very much in focus.

From the beginning, it has been clear that mouse or even standard touch-screen conventions wouldn’t be completely applicable. Multitouch and multiuser design requires new thinking, more experimentation, and careful user study. I want to share some of what we’ve learned and the areas that we are still investigating. I’m also doing this in preparation for a workshop that we’ll be conducting at Museums and the Web (called “Make it Multitouch”) and a short presentation for the Canadian Museum Association’s annual meeting (called “Doers and Dreamers”) in Toronto at the end of March.

This discussion is divided into three blog posts: the first explores user interaction and feedback; the second focuses on User Interface (UI) elements, objects, and environments; and the third looks more broadly at how multitouch and multiuser exhibits can shape the visitor experience.

Interactions
How do users interact with interface elements and content on a multitouch screen or surface? And how are these interactions different than those we observe in standard mouse-driven or touch-screen applications? Below is a list of some of the unique ways visitors can interact with a multitouch interface. As you’ll see, some are very natural and others are more obscure. It is a strange blend of intuitive gestures and secret handshakes.

Touch. This works much as it does on a standard touch screen, although touch areas are made larger than those in mouse- or trackball-driven kiosks and exhibits to accommodate a fingertip.
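In practice, one way to build in that forgiveness is to accept any touch within a generous radius of a control’s center rather than requiring a pixel-precise hit. A minimal TypeScript sketch (the radius here is an assumed value, not a measured recommendation):

```ts
// Accept a touch within a generous radius of a control's center,
// rather than requiring a pixel-precise hit.
const FINGER_RADIUS = 24; // px; an assumed value, tuned through testing

function hits(target: { x: number; y: number }, touchX: number, touchY: number): boolean {
  return Math.hypot(touchX - target.x, touchY - target.y) <= FINGER_RADIUS;
}
```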

Drag. With either one finger or multiple points, this type of interaction is similar to what we see with a mouse and pointer.
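For readers who want to experiment, here is a minimal one-finger drag using the browser’s Pointer Events API. Our exhibits don’t run on this exact stack, and the element id is hypothetical, so treat it as an illustration of the pattern rather than our implementation:

```ts
// Minimal drag sketch; assumes an absolutely positioned element.
const card = document.getElementById("photo-card") as HTMLElement; // hypothetical id
let startX = 0, startY = 0, originLeft = 0, originTop = 0;

card.addEventListener("pointerdown", (e) => {
  card.setPointerCapture(e.pointerId); // keep receiving moves outside the element
  startX = e.clientX;
  startY = e.clientY;
  originLeft = card.offsetLeft;
  originTop = card.offsetTop;
});

card.addEventListener("pointermove", (e) => {
  if (!card.hasPointerCapture(e.pointerId)) return;
  card.style.left = `${originLeft + e.clientX - startX}px`;
  card.style.top = `${originTop + e.clientY - startY}px`;
});

card.addEventListener("pointerup", (e) => card.releasePointerCapture(e.pointerId));
```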

Pinch & Expand. This is an intuitive way to increase or decrease the size of objects in multitouch environments. In one case, we saw that just the act of placing a hand on the table surface slightly expanded an object (the hand opened a bit more as it impacted the surface). This allowed the visitor to immediately understand how to size the object. Pinch & Expand is common in ordinary hand gestures when talking about how big or small something is.

Rotate. As a visitor drags or pinches and expands an object it becomes apparent whether it can be rotated or not. Since multitouch tables have multiple points of approach, most applications provide visitors with the ability to rotate objects.
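Rotation falls out of the same two touch points: track how the angle of the line between the fingers changes from the moment the gesture begins. A short sketch (the helper below is illustrative, not from our codebase):

```ts
// Rotation sketch: track the change in angle of the line between two fingers.
function angleBetween(a: { x: number; y: number }, b: { x: number; y: number }): number {
  return Math.atan2(b.y - a.y, b.x - a.x);
}

let startAngle = 0;    // captured when the second finger lands
let startRotation = 0;
let rotation = 0;      // radians, e.g. applied via CSS `rotate(${rotation}rad)`

function updateRotation(p1: { x: number; y: number },
                        p2: { x: number; y: number },
                        gestureBegan: boolean) {
  if (gestureBegan) {
    startAngle = angleBetween(p1, p2);
    startRotation = rotation;
  } else {
    rotation = startRotation + (angleBetween(p1, p2) - startAngle);
  }
}
```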

Double-Tap. We’ve used this type of interaction in a mapping mashup to “call over” a floating universal navigation element. We found this helpful for our large table, where the floating navigational item could be out of reach. However, our testing showed that this was not as intuitive as some of the other types of interaction, although once visitors observed it, most found it simple and helpful.
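Detecting a double-tap typically comes down to two thresholds: the taps must land within a short time window and a small radius of each other. A sketch with assumed threshold values (and a hypothetical moveNavWidgetTo helper):

```ts
// Double-tap sketch: two taps within a short window and a small radius.
const DOUBLE_TAP_MS = 300;    // assumed threshold; tune through user testing
const DOUBLE_TAP_RADIUS = 40; // px; generous for fingertips

let lastTapTime = 0;
let lastTapX = 0, lastTapY = 0;

function onTap(x: number, y: number, now: number): boolean {
  const isDouble =
    now - lastTapTime < DOUBLE_TAP_MS &&
    Math.hypot(x - lastTapX, y - lastTapY) < DOUBLE_TAP_RADIUS;
  lastTapTime = isDouble ? 0 : now; // reset so a triple tap doesn't fire twice
  lastTapX = x;
  lastTapY = y;
  return isDouble;
}

// e.g. if (onTap(e.clientX, e.clientY, e.timeStamp)) moveNavWidgetTo(e.clientX, e.clientY);
```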

[Image: kids playing with the NUI Gravitor application]

Draw. Some multitouch applications allow visitors to draw shapes, such as the NUI Gravitor application (seen above). It is also possible to draw “commands.” For example, you could draw an “x” on an object to close it. This would assume, however, that the object could not be dragged or resized, since those interactions would interfere with the ability to draw.
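A drawing interaction starts with simply recording each finger’s path; a stroke recognizer can then classify the result. A minimal capture sketch (the $1 Unistroke Recognizer mentioned in the comment is one published option, not necessarily what any of these applications use):

```ts
// Stroke-capture sketch: record each finger's path so it can be rendered
// or matched against a command shape (the "x"-to-close idea above).
const strokes = new Map<number, { x: number; y: number }[]>();

function strokeStart(e: PointerEvent) {
  strokes.set(e.pointerId, [{ x: e.clientX, y: e.clientY }]);
}

function strokeMove(e: PointerEvent) {
  strokes.get(e.pointerId)?.push({ x: e.clientX, y: e.clientY });
}

function strokeEnd(e: PointerEvent) {
  const path = strokes.get(e.pointerId);
  strokes.delete(e.pointerId);
  // A recognizer (e.g. the $1 Unistroke Recognizer) could classify `path` here.
}
```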

Flip or Flick. This gesture allows visitors to quickly browse through “stacks” of photographs or other fixed-size objects. It works well with “dual touch” technologies like the iPhone and the HP TouchSmart.
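One way to detect a flick is to track recent pointer velocity and treat a fast release as a flick. A sketch with an assumed speed threshold:

```ts
// Flick sketch: treat a release as a flick when recent velocity is high enough.
const FLICK_SPEED = 1.0; // px/ms; assumed value, tune per device

let prevX = 0, prevY = 0, prevT = 0;
let vx = 0, vy = 0;

function flickStart(e: PointerEvent) {
  prevX = e.clientX; prevY = e.clientY; prevT = e.timeStamp;
  vx = vy = 0;
}

function flickMove(e: PointerEvent) {
  const dt = e.timeStamp - prevT;
  if (dt > 0) {
    vx = (e.clientX - prevX) / dt;
    vy = (e.clientY - prevY) / dt;
  }
  prevX = e.clientX; prevY = e.clientY; prevT = e.timeStamp;
}

function flickEnd(): void {
  if (Math.hypot(vx, vy) > FLICK_SPEED) {
    // advance to the next photo in the stack, in the direction of (vx, vy)
  }
}
```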

Feedback
Visitors can benefit from additional feedback as they interact with multitouch applications. Occasionally, there can be a lag in direct feedback for some of the interactions listed earlier. This can be especially true in multiuser environments where the application is trying to process dozens of simultaneous points.

[Image: a visitor resizing a Cézanne painting, with a tracer following his finger]

Tracers/Trails/Auras. As each finger point is detected as a “blob” by the “touch core” software, a small graphic or animation follows the point across the surface or screen. You can see a tracer (above) following the visitor’s finger as he resizes the painting. (His finger is slightly off the table so you can clearly see the “tracer.”)
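Rendering a tracer can be as simple as keeping a short queue of recent points per finger and drawing them with fading opacity each frame. A canvas-based sketch (the overlay element id and trail length are assumptions, and touch-style pointers are assumed so that moves only arrive during contact):

```ts
// Tracer sketch: a short, fading trail of recent points per finger,
// drawn each frame on a canvas overlay.
const canvas = document.getElementById("overlay") as HTMLCanvasElement; // hypothetical id
const ctx = canvas.getContext("2d")!;
const trails = new Map<number, { x: number; y: number }[]>();
const TRAIL_LENGTH = 12; // points kept per finger; assumed value

canvas.addEventListener("pointermove", (e) => {
  const trail = trails.get(e.pointerId) ?? [];
  trail.push({ x: e.clientX, y: e.clientY });
  if (trail.length > TRAIL_LENGTH) trail.shift();
  trails.set(e.pointerId, trail);
});

canvas.addEventListener("pointerup", (e) => trails.delete(e.pointerId));

function drawTrails() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (const trail of trails.values()) {
    trail.forEach((p, i) => {
      ctx.globalAlpha = (i + 1) / trail.length; // older points fade out
      ctx.beginPath();
      ctx.arc(p.x, p.y, 6, 0, Math.PI * 2);
      ctx.fill();
    });
  }
  requestAnimationFrame(drawTrails);
}
requestAnimationFrame(drawTrails);
```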

Highlights and Ghosting. As visitors touch an object, it can be made to highlight or animate in some way. Ghosting can be helpful for dragging as you can still see where the item originated. Highlights provide the user with instantaneous feedback and reinforcement of their current action.
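Ghosting, for instance, can be approximated by leaving a translucent clone of the dragged element at its origin. A small sketch (the opacity value is arbitrary):

```ts
// Ghosting sketch: leave a translucent copy of a dragged element at its origin.
function startDragWithGhost(el: HTMLElement): HTMLElement {
  const ghost = el.cloneNode(true) as HTMLElement;
  ghost.style.opacity = "0.35";       // assumed value; faint but visible
  ghost.style.pointerEvents = "none"; // the ghost should never intercept touches
  el.parentElement?.insertBefore(ghost, el);
  return ghost;                       // remove it when the drag ends
}
```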

Connections. Lines (or other indicators) that connect objects can be helpful for wayfinding, particularly in multiuser environments. For our multitouch mapping application, we created connection lines from photographs to their points on the map, knowing that one user may be manipulating a photograph while another is controlling the map. This allows a visitor to trace the connection line back to the geographical point where the photograph was taken.
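Drawing the lines themselves is mostly bookkeeping: because either end can move at any moment, each line is redrawn every frame from the photo’s current position to its marker’s current position. An illustrative sketch (the data shape is invented for the example):

```ts
// Connection-line sketch: redraw a line from each photo to its map marker
// every frame, since either end may be moving.
interface Photo {
  x: number;
  y: number;
  mapPoint: { x: number; y: number }; // the photo's location on the map layer
}

function drawConnections(ctx: CanvasRenderingContext2D, photos: Photo[]) {
  for (const p of photos) {
    ctx.beginPath();
    ctx.moveTo(p.x, p.y);
    ctx.lineTo(p.mapPoint.x, p.mapPoint.y);
    ctx.stroke();
  }
}
```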

In my next post, I’ll explore how these interactions are applied to User Interface (UI) elements, objects and environments. As you’ll see, things get really interesting when we look to adapt and invent new ways for visitors to interact.
