Multitouch surface at the Two Oceans Aquarium in Cape Town
In 2008 our company designed a 100” rear-projected multitouch wall for the Two Oceans Aquarium in Cape Town (Fig 1).
Fig. 1, 2. Frog wall multitouch screen
The goal of the exhibit was to create an immersive interactive landscape that encourages aquarium visitors to inquire and learn about the habitats, calls and threats of local frog species.
Users find themselves looking at a photo-realistic animated landscape. By touching various hotspots, they activate one of the habitats, revealing a 360° panoramic view of a typical area. Now, the user is presented with information on the selected attraction (Fig. 3, 4).
Fig. 3,4. Interactive wall for Cape Town Tourism Visitor Centre
Limitations of the interaction
At the time we used infrared cameras for blob detection, not only for multitouch displays, but also for interactive floor and wall projections (the Microsoft Kinect had not been released yet). The overhead 2D tracking of user activity on the floor only provided x and y positions, as well as blob size and grouping, as a means of user input. Since the floor in front of the wall is also a passage area in the centre, we had to differentiate between intended interaction and visitors merely passing by.
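As a rough illustration of how such intent filtering can work, the sketch below separates engaged visitors from passers-by purely on dwell time inside a notional interaction zone in front of the wall. The class names, dwell threshold and zone depth are hypothetical assumptions for illustration, not values from the actual installation.

```python
from dataclasses import dataclass

DWELL_FRAMES = 30          # assumed: ~1 s at 30 fps before a blob counts as "engaged"
INTERACTION_DEPTH = 1.5    # assumed: metres from the wall along the camera's y axis

@dataclass
class BlobTrack:
    frames_in_zone: int = 0
    engaged: bool = False

class IntentFilter:
    """Separates visitors who stop to interact from those just passing through."""

    def __init__(self):
        self.tracks = {}

    def update(self, blobs):
        """blobs: dict of blob_id -> (x, y) floor position for one camera frame.
        Returns the set of ids considered intentional users this frame."""
        engaged = set()
        for blob_id, (x, y) in blobs.items():
            track = self.tracks.setdefault(blob_id, BlobTrack())
            if y <= INTERACTION_DEPTH:
                track.frames_in_zone += 1
            else:
                track.frames_in_zone = 0   # stepped out of the zone: reset the dwell
            track.engaged = track.frames_in_zone >= DWELL_FRAMES
            if track.engaged:
                engaged.add(blob_id)
        # forget blobs that vanished from the camera image
        self.tracks = {i: t for i, t in self.tracks.items() if i in blobs}
        return engaged
```

A visitor crossing the passage area only spends a handful of frames inside the zone and never reaches the dwell threshold, while someone standing in front of the wall is promoted to an engaged user after about a second.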
Fig. 5. iPoint tracker by HHI Fraunhofer, Fig. 6. Problems with the gesture application
Lessons learned
Challenges with the conceptual model of the interaction
Although pointing is surely one of the most archaic gestures, as it can already be observed in small infants, users presented with a screen and the invitation to point at it behave anything but "naturally". Even when instructed to point at the screen "naturally", many users bent their arms in anticipation of triggering a sensor in the black box above or underneath them.
A similar solution is featured in Microsoft Kinect games.
Fig. 7. iPoint "Don't click" GUI, Fig. 8. Sample of final GUI solution
Even with the time-delayed activation functionality, we still faced ergonomic problems. It was uncomfortable and tiring to navigate the interface, even if just for a few minutes at a time. We realised that we had spent a lot of time thinking about trigger mechanisms, but had not yet looked closely at the specific ergonomics of pointing with hand and arm. After a few tests we found that although users managed to navigate traditional GUIs with
states added to the experience of control and ease, while the trigger areas or buttons needed to be separated from the active areas, so that the trigger timer would only start if users intended to activate it.
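The separated trigger logic described above can be sketched as a small state machine: the outer hover area only gives visual feedback, while the dwell timer runs exclusively inside a smaller inner trigger area, so a pointer sweeping across the screen cannot fire a button by accident. The class, rectangle geometry and one-second dwell below are illustrative assumptions, not the production values of the iPoint GUI.

```python
class DwellButton:
    """Hover gives feedback only; the dwell timer runs solely inside a
    smaller, separated trigger area, so glancing across the button
    cannot fire it."""

    def __init__(self, hover_rect, trigger_rect, dwell_s=1.0):
        self.hover_rect = hover_rect      # (x, y, w, h) outer hover/feedback area
        self.trigger_rect = trigger_rect  # smaller inner area that arms the timer
        self.dwell_s = dwell_s
        self._armed_at = None             # timestamp when the pointer entered

    @staticmethod
    def _inside(rect, px, py):
        x, y, w, h = rect
        return x <= px <= x + w and y <= py <= y + h

    def update(self, px, py, now):
        """Feed the current pointer position and time; returns (state, fired)."""
        if self._inside(self.trigger_rect, px, py):
            if self._armed_at is None:
                self._armed_at = now      # pointer just entered: start the timer
            if now - self._armed_at >= self.dwell_s:
                self._armed_at = None     # fire once, then require a fresh dwell
                return "triggered", True
            return "arming", False
        self._armed_at = None             # leaving the trigger area resets the timer
        if self._inside(self.hover_rect, px, py):
            return "hover", False
        return "idle", False
```

Driving it from the tracker's update loop, the button reports "hover" in the outer area, "arming" while the timer counts down inside the trigger area, and "triggered" once the dwell elapses.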
Fig. 9, Fig. 10, Fig. 11. Evolution of the pointing GUI
The Virtual Chemistry Lab Table (VCLT)
A few years ago, Formula D interactive started the development of the Virtual Chemistry Lab Table (VCLT), a tool to help learners understand the basics of chemistry through hands-on interaction. Once connected to a standard computer and screen, with the software installed, the VCLT allows learners to arrange physical objects on a surface in order to control the software (Fig. 12).
Fig. 12. VCLT GUI, Fig. 13. Desktop console concept render
Learners then explore the digitally simulated experiments using an array of tools similar to those in a real chemistry lab, by placing objects in proximity to one another. A simple content management system expands the functionality of the lab from a simulator to a documentation and presentation tool. Here, learners capture their own knowledge or test results (from real lab experience or secondary research) and embed content such as images, video or text within the application.
Now they can share and discuss their findings with peers in a classroom setting. The VCLT is built on top of the reacTIVision platform (http://reactivision.sourceforge.net/), with custom-built hardware and a content development system.
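reacTIVision reports each tangible object as a fiducial id with a position on the surface, so the rule that lets two tools "react" when placed in proximity can be reduced to a pairwise distance check over the tracked markers. The function and threshold below are a hypothetical sketch of that idea, not the VCLT's actual code.

```python
import math

MIX_RADIUS = 0.12   # assumed threshold, in normalised [0, 1] surface units

def find_combinations(markers, radius=MIX_RADIUS):
    """markers: dict of fiducial_id -> (x, y) in normalised surface
    coordinates, as a reacTIVision-style tracker would report them.
    Returns sorted id pairs close enough to react with each other."""
    pairs = []
    ids = sorted(markers)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            ax, ay = markers[a]
            bx, by = markers[b]
            # Euclidean distance between the two physical objects
            if math.hypot(bx - ax, by - ay) <= radius:
                pairs.append((a, b))
    return pairs
```

Each returned pair would then be looked up in the simulation's reaction table; markers that sit far apart on the table simply produce no event.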
Fig. 14. VCLT CMS, Fig. 15. User testing at Wynberg Girls' Junior School
A virtual chemistry lab is a great application for tangible interaction through reference objects. Users can easily transfer the mental model from a real chemistry lab to the simulated one. The interactions are similar yet simplified.