Found in space

The MINDful Play Environment is born

Dene Grigar & Steve Gibson 28 October 2007

Playing around: a new platform for learning in the sci/art/humanities sphere

Students no longer experience optimal learning when they are only expected to sit, listen and converse

The MINDful Play Environment (MPE), an acronym for Motion-tracking, Interactive Deliberation, is a virtual learning environment created as a performance-installation piece, driven by motion-tracking technology, in which three people interact with one another and media elements like video, animation, music, lights, and spoken word. The artists and programmers involved in this project – Dene Grigar, Steve Gibson, Justin Love, and Jeannette Altman – have produced this educational environment so that it does not look or feel educational but rather game-like in a way that is both playful and mindful. Here, we provide an overview of the project, talk about the Phase I stage just completed, and describe the future steps planned for its development and use.

Project Overview

This project was born out of previous projects that we worked on both separately and together. Credit for the concept goes to Steve, who has been pioneering motion-tracking performances and installations for close to a decade. His music and light piece, Virtual DJ, is the foundation upon which MPE is built. After meeting at trAce’s Incubation conference in Nottingham, UK, in 2004, we began experimenting with narrative structure and motion-tracking technology, expanding the media elements from music and light to include video, still images, animation and spoken word. When Ghosts Will Die was the piece that came out of this collaboration.

Fig. 1. Networked performance of Virtual DJ

Both Virtual DJ and Ghosts were made with gaming in mind. In a performance of Virtual DJ, for example, we, or just one of us, play the game. As an installation, the audience joins in play. In terms of game qualities, both works offer levels of play that grow increasingly challenging as users gain expertise. Virtual DJ offers five game levels in which the user moves from playing a few notes of music to a full-on musical composition. Both Virtual DJ and Ghosts are immersive environments that require users, as Jesper Juul says, to “engage in pretense play” and “map the [user] into the game world” that we created (Juul, 2004).

Ghosts forces the user to confront the decision to drop bombs on Hiroshima and Nagasaki by taking the user through the steps of the development of nuclear proliferation and ending with bodies of the dead projected directly onto the user’s body. Both works also require user agency in that users “influence the game state” (Juul, 2004) every step of the way. In each work, if the user does not move in the space, absolutely nothing happens in the game. And finally, both works offer highly interactive experiences in which the users’ kinesthetic involvement plays a significant role and enactment emerges as what Simon Penny describes as a “powerful technique” for having some effect on the actions of the observer (Penny, 2004). Essentially, it is this last quality that we wanted to focus on the most in the development of MPE, since one of the main successes of Virtual DJ and Ghosts lay in their connection to mindful yet highly physical activity. To put it simply, both works are aerobic in nature – Virtual DJ inspires dancing and Ghosts, moving briskly around a rather large space – and this movement results in “literal” outcomes that impact intellectual and emotional engagement. Ghosts, in particular, asks users to seriously consider their actions leading up to the deployment of the atomic bomb. Those who have experienced the work have attested to a shift in consciousness about, and mindful contemplation of, the repercussions of the use of atomic weapons to resolve conflicts.

Fig. 2. Gibson performing When Ghosts Will Die

The combination of physical movement and mindful contemplation lies at the heart of MPE. The project goal is to investigate whether media-rich, interactive environments that encourage kinesthesia can be utilized effectively for learning, particularly high-level math and science and language skills – and we are trying to accomplish this task by making MPE less an educational space and more a game space, much like Dance Dance Revolution and the Wii. In terms of math and science skills, we are interested in learning whether users can better understand 3D triangulation and spatial mapping through their experience in MPE; for language, we are interested in learning whether users can improve word choice, organization and ideas/content through their experience in MPE. It is our premise that by enacting the coordinates and space, users will embody the knowledge and so learn these concepts through muscle memory and Penny’s notion of “[un]conscious decision making” (Penny, 2004).

We also subscribe to Francisco Varela and colleagues’ concept of "mindfulness," or what they call the “embodied everyday experience” whereby “the mind [is led] back from its theories and preoccupations, back from the abstract attitude, to the situation of one’s experience itself.” Cognition, from this perspective, is inextricably linked to “embodied action,” that is, “the kinds of experience that come from having a body with various sensorimotor capacities,” as well as the way “individual sensorimotor capacities are themselves embedded in a more encompassing biological, psychological, and cultural context” (Varela et al., 1993). Citing the way an athlete or musician pulls together mind and body into focused action, Varela and his collaborators suggest that the practice of mindfulness does not take the person out of the body but rather places attention on the entire aspect of one’s “presence” in order to reconnect the person to “their very experience” of living. Connecting Varela’s and Penny’s ideas to current educational theories generated by scholars like Spinks, Sprenger, and Swanson, we arrive at the premise that embodied action leads to embodied knowledge because “the components of our brains that manage thought processes work better with movement” (Spinks, 2002; Sprenger, 2003; Swanson, 1995, in Grigar et al., 2007). And we are hoping to achieve this outcome without creating an environment that is obviously educational.

How does it work? As we mentioned earlier, the project calls for the use of motion-tracking technology. To this end, we are using a proprietary system called the Gesture and Media System (GAMS), created by Will Bauer of APR, Inc. of Edmonton, Canada. The system organizes the space in a 3D grid. Media elements – light, music, spoken word, video and animations – are programmed in zones and points on the grid and respond to hand-held tracking devices, or "trackers," much in the same way that a page is evoked when a cursor, driven by a mouse, touches a hyperlink on a webpage.
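The zone-and-point triggering described above can be sketched in a few lines of code. GAMS itself is proprietary, so the zone shapes, cue names and functions below are purely illustrative assumptions, not the actual system:

```python
# Hypothetical sketch of zone-based triggering on a 3D grid.
# Zone bounds and cue names are illustrative only; GAMS is proprietary.

def in_zone(pos, zone):
    """Return True if a tracker position falls inside a box-shaped zone."""
    (x, y, z) = pos
    return (zone["x"][0] <= x <= zone["x"][1] and
            zone["y"][0] <= y <= zone["y"][1] and
            zone["z"][0] <= z <= zone["z"][1])

# Each zone pairs a region of the grid with a media cue, much the way
# a hyperlink pairs a region of a webpage with another page.
zones = [
    {"x": (0, 2), "y": (0, 2), "z": (0, 3), "cue": "drum_loop_1"},
    {"x": (2, 4), "y": (0, 2), "z": (0, 3), "cue": "bass_loop_1"},
]

def cues_for(pos):
    """Return every media cue evoked by a tracker at this position."""
    return [z["cue"] for z in zones if in_zone(pos, z)]
```

Moving a tracker into a zone evokes its cue the way a cursor touching a hyperlink evokes a page; leaving every zone evokes nothing.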

Fig. 3. Computer interface showing zones and points on the 3D spatial grid

The set-up of the room using this technology requires four infrared cameras mounted in each corner and pointed down into the performance space. These cameras are linked to one another via FireWire to a PC that runs the proprietary software, called Flashtrak, controlling three Martin 250 Entour robotic lights and the Martin fog machine. The PC, in turn, is connected to a Mac sitting beside it that houses the software – Ableton Live and Modul8, discussed below – driving the MIDI information that makes up the media. This setup means that when a player moves in the space, her or his motion is tracked by the cameras and sent to the PC which, in turn, triggers the light programmed in that particular space as well as sends the data to the Mac so that it can trigger the music and video.
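The data flow just described – cameras to PC, PC to lights and onward to the Mac – can be sketched as a simple pipeline. All function and field names here are hypothetical stand-ins; the real chain runs through Flashtrak on the PC and Ableton Live / Modul8 on the Mac:

```python
# Illustrative sketch of the MPE hardware pipeline; names are assumptions.

def track_position(camera_frames):
    """Stand-in for the four infrared cameras resolving one tracker position."""
    n = len(camera_frames)
    return (sum(f["x"] for f in camera_frames) / n,
            sum(f["y"] for f in camera_frames) / n,
            sum(f["z"] for f in camera_frames) / n)

def pc_stage(pos):
    """PC side (Flashtrak's role): cue a robotic light, forward the data as MIDI."""
    light_cue = {"fixture": "Entour_1", "pos": pos}
    midi_out = {"to": "mac", "pos": pos}
    return light_cue, midi_out

def mac_stage(midi_out):
    """Mac side (Ableton Live / Modul8's role): trigger music and video."""
    return {"audio": "trigger", "video": "trigger", "pos": midi_out["pos"]}
```

One player movement thus fans out into three simultaneous media responses: light on the PC side, music and video on the Mac side.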

Fig. 4. Structure of the MINDful Play Environment

For the project to be successful, we need ways of assessing the effectiveness of the environment for learning. To this end, we are creating two test sites. The first, an exhibit called VJDJ, is located at the Oregon Museum of Science and Industry (or “OMSI”) and will test math and science concepts. The second, called Rhapsody Room, is located in Dene’s lab at Washington State University Vancouver (the MOVE Lab) and will test language concepts. The educational administrators at OMSI are bringing in students from the Oregon schools and testing them qualitatively and quantitatively. To bring in students from schools local to Dene’s lab at WSUV, we are partnering with the Vancouver School District. We are also partnering with a literacy and assessment expert, Michael Dunn, from the education program at WSUV and another specializing in data collection and analysis, Michael Raisinghani from Texas Woman’s University, to handle the assessment for the team.

As for the timeline, completion of the project is set for January 2009. Phase I, conceptualizing and programming MPE, has just been completed. We have now moved to Phase II, which sees the development of the OMSI site media (video, sound, music, animation) and beta-testing of the environment with media intact. Phase III involves the production of the MOVE Lab media content and collateral material for both sites; Phase IV will involve testing; and Phase V, analysis and reporting.

Phase I: what have we done so far?

The conceptualization and programming stage of the project, as we’ve mentioned, is now complete. Figure 5 shows the layout of the environment at the OMSI site and the media assignment for that site. The four cameras (not shown) are mounted in the corners. You see the three robotic lights arranged in different places in the space, and the three trackers that players will use to interact in the space. You can also see directions for how each tracker behaves.

Fig. 5. Map of MPE Media Assignment

Playing in the environment involves each user taking possession of a tracker. Each of the three trackers controls one channel of music in the DJ software Ableton Live and a channel of video in the VJ software Modul8. This means that Player One, holding tracker 1, can control the video playing on one wall and the drum sound; Player Two, holding tracker 2, the video on a second wall and the bass sound; and Player Three, holding tracker 3, the video on a third wall and the melody. As you can see, all three players also control their own light source. In fact, up to four players can interact with one another in this environment. Other media, like animation and the written word, can also be produced in this environment and controlled by various players.
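The tracker-to-media assignment just described amounts to a simple lookup table. The wall and light identifiers below are our own illustrative labels; the actual routing lives in Ableton Live and Modul8:

```python
# Illustrative tracker-to-channel assignment; wall and light names are assumed.
TRACKER_CHANNELS = {
    1: {"audio": "drums",  "video": "wall_1", "light": "light_1"},
    2: {"audio": "bass",   "video": "wall_2", "light": "light_2"},
    3: {"audio": "melody", "video": "wall_3", "light": "light_3"},
}

def channels_for(tracker_id):
    """Return the audio, video and light channels a given tracker controls."""
    return TRACKER_CHANNELS[tracker_id]
```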

Fig. 6. Three players interacting in MPE

As players interact with one another, they are able to change the media. For example, when Player One moves front and back (y-plane) in the space, she changes drum sounds, video clips and light colors, and when she moves up and down (z-plane), she changes drum volume, video clip opacity and the light dimmer. At the floor all three of these values are set to minimum, which means that when she raises her hand to 100 cm, she is able to fade each of these three values to maximum. Her side-to-side movement (x-plane) changes low-pass filter and delay in the drums.
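Player One's axis-to-parameter mapping above can be sketched as follows. The 0–127 range is the standard MIDI controller value range; the 100 cm ceiling comes from the description above, while the clip-selection and filter scaling are assumptions for illustration:

```python
# Sketch of Player One's axis mapping; scaling details are illustrative.

def z_to_midi(z_cm, z_max_cm=100):
    """Map hand height to a 0-127 MIDI value: floor = minimum, 100 cm = maximum."""
    z = max(0, min(z_cm, z_max_cm))
    return round(z / z_max_cm * 127)

def drum_params(x, y, z_cm):
    return {
        # y-plane (front/back): selects drum sound, video clip and light color
        "clip_index": int(y),
        # z-plane (up/down): fades volume, opacity and dimmer together
        "volume": z_to_midi(z_cm),
        "opacity": z_to_midi(z_cm),
        "dimmer": z_to_midi(z_cm),
        # x-plane (side to side): low-pass filter and delay on the drums
        "filter": round(x),
        "delay": round(x),
    }
```

Because volume, opacity and dimmer all track the same z value, raising the hand fades sound, video and light in lockstep, which is what makes the gesture legible to the player.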

Fig. 7. Players changing media with their movements in MPE

As we’ve mentioned, the development phase included conceptualizing the environment and programming the behaviors that occur in it. We began working on the concept of MPE in the autumn of 2006 with the idea that it would be built on the Virtual DJ engine and run this autumn at OMSI. As we worked out the details and thought through our theory about kinesthetic learning over the spring and summer, we began to see the project as an intricate one needing more time. The commercial introduction of the Wii last winter provided the impetus to make the environment more complex and immersive. What we ended up with is an environment that offers the three players three levels of play with control over up to six videos, sound and light, which can be combined with one another in a variety of ways and expanded exponentially depending on how players interact with one another.

For example, if Players Two and Three move toward one another, the proximity of one tracker to another affects the audio and video. So if Player Two approaches within 1.5 meters of Player Three, the bass will increase its distortion level, and the current video controlled by the melody will also appear in the bass video screen, blended with the bass video. All three trackers are programmed to respond to one another similarly. In this way, MPE is intended to encourage collaborative learning through kinesthetic play.
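The proximity rule above can be sketched as a distance check between two trackers. The 1.5-meter threshold comes from the description; the linear scaling of the effect strength is our own illustrative assumption:

```python
# Sketch of the bass/melody proximity rule; parameter names are illustrative.
import math

def proximity_effects(bass_pos, melody_pos, threshold_m=1.5):
    """Within 1.5 m, distort the bass and blend the melody video
    into the bass screen; beyond it, leave both untouched."""
    d = math.dist(bass_pos, melody_pos)  # Euclidean distance in 3D
    if d < threshold_m:
        # Closer approach -> stronger effect (0.0 at the threshold, 1.0 at contact)
        strength = 1.0 - d / threshold_m
        return {"bass_distortion": strength, "video_blend": strength}
    return {"bass_distortion": 0.0, "video_blend": 0.0}
```

Since all three trackers respond to one another this way, each pairwise distance becomes another control parameter the players discover through movement.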

Fig. 8. Three players collaborating in MPE

The physical embodiment of x, y, z coordinates and spatial mapping, combined with the potential for textual representation of data and with media focusing on sound, images, and light, makes MPE, we think, a unique and robust site for learning concepts relating to math and science as well as language.

The complexity of the programming is best demonstrated in the structural maps that describe the players’ relation to the media and the parameters of the Flashtrak MIDI sent to Modul8 and Ableton Live. Figure 9 shows the way in which Tracker One handles the drums. All aspects of the environment have been mapped out to show the unique behaviors of each tracker and the way in which the trackers relate to one another.

Along with maps of the space, we have conceptualized the various MIDI mappings from Flashtrak to both Modul8 and Ableton Live. A representation of that data is shown in Figure 10, which depicts the way the three trackers relate to the audio.

Fig. 9. Map of the Drum Tracker

Future Steps

While motion-tracking technology has seen wide use in physical therapy, surveillance, and entertainment, it has yet to be utilized in education. Additionally, the incorporation of movement into the classroom environment has not yet been fully realized outside of disciplines like dance and physical education, where physical activity is seen as a necessary component of the discipline. Furthermore, print is still the medium of choice in most classroom settings. As we’ve written previously (Grigar et al., 2007):

[This situation lies in contrast to] our everyday experiences where print, radio, and television have been replaced by video computers, cell phones, and iPods as preferred communication devices. Joysticks, gameboy interfaces, and IMing offer highly physical and dynamic interactions with information. The growing popularity of games like Nintendo’s Wii and DDRGaming’s Dance Dance Revolution means that young people have not only become accustomed to media-rich environments made possible by multimedia technologies but also those that offer kinesthetic and kinetic opportunities.

In working on documentation for the assessment portion of the project, our colleague Michael Dunn explained (Dunn, 2007):

[Research shows that] informational technologies have, indeed, impacted how learning occurs. Auditory learners now make up the smallest percentage of learners in schools (Tileston, 2004). [I mentioned earlier that] research indicates that the components of our brains that manage thought processes work better with movement (Spinks, 2002; Sprenger, 2003; Swanson, 1995). It stands to reason that students no longer experience optimal learning when they are only expected to sit, listen and converse. Rather, they need different formats of classroom instruction to help facilitate learning.

We and the other collaborators on this project anticipate that the media-rich and kinesthetic environment of the MINDful Play Environment will be highly conducive to the process of learning. Therefore, if assessment of the project yields empirical evidence supporting our premise that the environment is effective, then future plans for MPE include implementing and licensing it at various educational facilities.

Fig. 10. Audio relational map for MPE

In conclusion, the research undertaken in this project will provide the opportunity to develop and test a classroom of the future that utilizes technologies and sensory modalities that can potentially change the way we teach and impact students’ success with learning. It also has the potential of altering current views of education that compartmentalize rather than combine art/performance, science/math, and the humanities for teaching higher level thinking skills.


Dunn, M (2007). “Grant Draft.” Personal Correspondence. 13 September.

Gibson, S (2004). Virtual DJ. Online.

Gibson, S and D Grigar (2005). When Ghosts Will Die. Online.

Grigar, D, Dunn, M, Gibson S and M Raisinghani (2007). “MINDful Play Environment: A Classroom of the Future.” Online.

Juul, J (2004). “Introduction to Game Time.” First Person: New Media as Story, Performance, and Game. Ed: Wardrip-Fruin N and P Harrigan. Cambridge, MA: The MIT Press. 131-142.

Penny, S (2004). “Representation, Enaction, and the Ethics of Simulation.” First Person: New Media as Story, Performance, and Game. Ed: Wardrip-Fruin N and P Harrigan. Cambridge, MA: The MIT Press. 73-84.

Spinks, D (2002). Frontline: Inside the teenage brain [Television broadcast]. Boston: Public Broadcasting Service.

Sprenger, M (2003). Differentiation through learning styles and memory. Thousand Oaks: Corwin Press, Inc.

Swanson, LJ (1995). Learning styles: A review of the literature. ERIC Document No. Ed 387 067.

Tileston, DW (2004). What every teacher should know about learning, memory, and the brain. Thousand Oaks: Sage Publications.

Varela, F, Thompson, E and E Rosch (1993). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: The MIT Press.

Related information

You can read more about this project and view video footage of some of the interactions on the project’s official website.