SMALLab


An Embodied Learning Environment

  • Embodied: Full body, kinesthetic interactivity
  • Multimodal: See, hear, and physically feel the experience
  • Collaborative: Face-to-face teaching and learning with digital media

SMALLab (Situated Multimedia Arts Learning Lab) is an embodied learning environment developed by a collaborative team of researchers and K-12 teachers associated with Arizona State University. This innovative environment allows the student’s physical body to function as an interface for learning. Within SMALLab, students use a set of tracked physical objects and wireless peripherals to interact in real time with dynamic visual, physical, and sonic media via 3D movements and gestures. SMALLab’s motion capture system senses the positions of the tracked objects, and our custom software routes that data through pattern analyses of student performance to drive real-time feedback.

For example, in the Spring Pendulum scenario, students are immersed in a complex physics simulation that engages multiple sensory modalities. They can hear the sound of a spring picking up speed, see projected pendulum spheres moving across the floor, feel a physical ball in their own hands, and integrate how the virtual ball moves in response to their own body movements. This helps students construct conceptual models of the content. Our embodied environment has proven appropriate for all content areas, including Science, Technology, Engineering and Math (STEM), Language and Literacy, and the Arts.

The Need

Our schools need transformational change. Students’ daily experiences are infused with interactive digital technology, yet schools have been slow to adapt. Over the past decade a number of educational technology products, including desktop simulations and online learning environments, have had a powerful impact on teaching and learning. However, the next revolution in media computing will be at the level of the interface, as is already apparent in recent products such as interactive whiteboards, not to mention the Nintendo DS and Wii. New advances in simulations and digital gaming are driving demand for modes of interaction that transcend desktop and console computing paradigms. At the same time, emerging research from the Learning Sciences shows that students, particularly those struggling to excel with traditional approaches, can greatly benefit from learning experiences that are embodied, collaborative, and multimodal. 3D immersive environments address this pressing need, and they are now affordable and robust enough for wide dissemination.

SMALLab is a student-centered learning environment that can improve outcomes and motivate students by opening new pathways to learning. It is a highly social environment that can facilitate rich communication among students and educators alike. SMALLab can nurture creativity and self-expression through learning that is kinesthetic and multimodal.

Central to our work is the development of the Situated Multimedia Arts Learning Lab [SMALLab]. SMALLab is an environment developed by a collaborative team of researchers from education, psychology, interactive media, computer science, and the arts. It is an extensible platform for semi-immersive, mixed-reality learning. By semi-immersive, we mean that the mediated space of SMALLab is physically open on all sides to the larger environment. Participants can freely enter and exit the space without wearing specialized display or sensing devices such as head-mounted displays (HMDs) or motion capture markers. Participants seated or standing around SMALLab can see and hear the dynamic media, and they can directly communicate with their peers who are interacting in the space. As such, the semi-immersive framework establishes a porous relationship between SMALLab and the larger physical learning environment. By mixed-reality, we mean that physical manipulation objects, 3D physical gestures, and digitally mediated components are integrated. By extensible, we mean that researchers, teachers, and students can create new learning scenarios in SMALLab using a set of custom-designed authoring tools and programming interfaces.

SMALLab supports situated and embodied learning by empowering the physical body to function as an expressive interface. Within SMALLab, students use a set of “glowballs” and peripherals to interact in real time with each other and with dynamic visual, textual, physical, and sonic media through full-body 3D movements and gestures. For example, working in the Spring Sling scenario, students are immersed in a complex physics simulation that engages multiple senses. They can hear the sound of a spring picking up speed, see projected bodies moving across the floor, feel a physical ball in their own hands, and integrate how the projected ball moves in accordance with their own body movements to construct a robust conceptual model of the entire system.
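At its core, such a simulation can be as simple as a damped spring integrated frame by frame against a tracked position. The Python sketch below illustrates the idea in one dimension; it is not SMALLab’s actual simulation code, and the stiffness, damping, and mass constants are assumptions chosen for illustration.

    # Minimal sketch of a damped spring driven by a tracked hand position.
    # Illustrative only: the constants and the input source are placeholders.
    K = 40.0       # spring stiffness, assumed
    DAMPING = 2.0  # damping coefficient, assumed
    MASS = 1.0     # virtual ball mass, assumed
    DT = 1.0 / 60  # one simulation step per tracking frame

    def step(ball_pos, ball_vel, hand_pos):
        """Advance the virtual ball one frame toward the tracked hand."""
        # Hooke's law toward the hand, plus velocity damping.
        force = K * (hand_pos - ball_pos) - DAMPING * ball_vel
        ball_vel += (force / MASS) * DT
        ball_pos += ball_vel * DT
        return ball_pos, ball_vel

    # The ball lags behind and oscillates around the hand as it moves.
    pos, vel = 0.0, 0.0
    for frame in range(5):
        pos, vel = step(pos, vel, hand_pos=1.0)
        print(round(pos, 4), round(vel, 4))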

Physically, SMALLab is a 15’ × 15’ × 12’-high freestanding, interactive space. A cube-shaped trussing structure frames its open architecture and supports the following sensing and feedback equipment: a six-element camera array for object tracking, a top-mounted video projector providing real-time visual feedback, four audio speakers for surround-sound feedback, and an array of tracked physical objects (glowballs). A networked computing cluster running custom software drives the interactive system. SMALLab also provides an embedded set of high-level authoring tools that allow students and teachers to create their own interactive learning scenarios. SMALLab is a scalable architecture designed to address the real-world financial and logistical constraints of today’s classrooms and community centers. Our team has deployed SMALLab in a series of pilot programs that have reached over 25,000 learners through regional school and museum programs.

Software Architecture

Interrelated software modules drive the interactive system; the bi-directional data flow between these components is described below.

Visual Feedback

A top-mounted video projector displays interactive visual content on the floor of SMALLab. In contrast to related CAVE environments, we have sought to develop an architecture that promotes social interaction and collaboration among groups of students. The absence of projection screens surrounding the space subverts many of the biases of screen-based media and creates an open physical environment.

Our feedback frameworks use still images and video clips that are collected and annotated by the students. In addition, we have developed a three-dimensional graphics engine using OpenGL. Interactive graphics modules are coupled with specialized learning exercises to help students develop an understanding of spatial relationships, movement dynamics, and activity patterns.
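At its simplest, driving floor-projected graphics from tracked objects reduces to mapping physical floor coordinates into projector pixel coordinates. The Python sketch below illustrates such a mapping under assumed dimensions (a roughly 15-foot square floor and a 1024 × 768 projector); it is illustrative and does not reproduce our OpenGL engine.

    # Illustrative mapping from physical floor coordinates (meters, origin
    # at the center of the space) to projector pixels. All dimensions here
    # are assumptions, not measurements of the actual installation.
    FLOOR_SIZE = 4.6                # ~15 feet, assumed
    SCREEN_W, SCREEN_H = 1024, 768  # assumed projector resolution

    def floor_to_pixels(x, y):
        """Map a tracked (x, y) floor position to pixel coordinates."""
        px = (x / FLOOR_SIZE + 0.5) * SCREEN_W
        py = (y / FLOOR_SIZE + 0.5) * SCREEN_H
        return int(px), int(py)

    print(floor_to_pixels(0.0, 0.0))   # center of the projection
    print(floor_to_pixels(-2.3, 2.3))  # one corner of the space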

Multimodal Sensing

Groups of students and educators interact in SMALLab together through the manipulation of up to five illuminated glowball objects and a set of standard HID devices, including wireless gamepads, Wii Remotes, and commercial wireless pointer/clicker devices. The vision-based tracking system senses the real-time 3D position of these glowballs at a rate of 50–60 frames per second using robust multi-view techniques. To address interference from the visual projection, each object is partially coated with a tape that reflects infrared light; the infrared cameras pick up reflections from this tape while remaining blind to the visual projection. Object position data is routed to custom software modules that perform real-time pattern analyses on this data and, in response, generate interactive sound and visual transformations in the space. With this simple framework we have developed an extensible suite of interactive learning scenarios and curricula that integrate arts, science, and engineering education.
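As one hypothetical example of such an analysis, the Python sketch below computes an exponentially smoothed speed estimate for a single tracked object from its 3D position stream; the frame rate and smoothing factor are assumptions for illustration.

    import math

    # Sketch of one real-time analysis on the tracked-object stream: an
    # exponentially smoothed speed estimate per glowball. Illustrative only.
    FRAME_RATE = 60.0  # tracking frames per second (50-60 in practice)
    ALPHA = 0.2        # smoothing factor, assumed

    class SpeedTracker:
        def __init__(self):
            self.last_pos = None
            self.speed = 0.0

        def update(self, pos):
            """pos is an (x, y, z) tuple in meters; returns smoothed speed."""
            if self.last_pos is not None:
                raw_speed = math.dist(pos, self.last_pos) * FRAME_RATE
                self.speed += ALPHA * (raw_speed - self.speed)
            self.last_pos = pos
            return self.speed

    tracker = SpeedTracker()
    for frame in range(3):
        print(tracker.update((0.01 * frame, 0.0, 1.0)))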

Sound Feedback

Sound plays a critical role in SMALLab, and many of our learning exercises depend on the immersive, three-dimensional nature of sound. Four raised speakers and one subwoofer surround the space, and we have developed software to project spatialized, reactive sound into it. An extensible database of sound files supports this module, and through in-classroom and web-based interfaces, both students and teachers can contribute sound content.
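One common approach to this kind of spatialization is distance-based amplitude panning: each speaker’s gain falls off with the virtual source’s distance from that speaker. The Python sketch below shows the idea; the speaker layout and gain law are assumptions for illustration, not our actual audio engine.

    import math

    # Sketch of distance-based amplitude panning across four corner
    # speakers. Speaker positions (meters) and the gain law are assumed.
    SPEAKERS = [(-2.3, -2.3), (2.3, -2.3), (-2.3, 2.3), (2.3, 2.3)]

    def speaker_gains(src_x, src_y):
        """Return one gain per speaker for a source at (src_x, src_y),
        normalized so that total output power stays constant."""
        weights = [1.0 / (1.0 + math.hypot(src_x - sx, src_y - sy))
                   for sx, sy in SPEAKERS]  # closer speaker -> louder
        norm = math.sqrt(sum(w * w for w in weights))
        return [w / norm for w in weights]

    # A source in the center excites all four speakers equally.
    print([round(g, 3) for g in speaker_gains(0.0, 0.0)])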

Our current work in this area extends prior research in the development of interactive installations and borrows techniques from musique concrète and concatenative music composition. We have designed specialized learning modules that allow students to record their own sounds from the environment and then discuss, share, and interact with those sounds in SMALLab. During classroom activities and via the Edulink website, these collected sounds can be annotated and auditioned by students and teachers. These annotations inform our models for interaction and allow for the delivery of sonic feedback that adapts to individuals and groups of students.

Integrated Authoring Tools

We have developed an integrated authoring environment, the SMALLab Core for Realizing Experiential Media [SCREM]. SCREM is a high-level, object-oriented framework at the center of interaction design and multimodal feedback in SMALLab. It provides a suite of graphical user interfaces for creating new learning scenarios or modifying existing ones, along with integrated tools for adding, annotating, and linking content in the SMALLab Media Content database. It facilitates rapid prototyping of learning scenarios, enables multiple entry points for the creation of scenarios, and provides age- and ability-appropriate authoring tools.

SCREM supports student and teacher composition at three levels. First, users can easily load and unload existing learning scenarios. These scenarios are stored in an XML format that specifies interactive mappings, visual and sonic rendering attributes, typed media objects, and metadata including the scenario name and date. Second, users can configure new scenarios by reusing software elements that are instantiated, destroyed, and modified via a graphical user interface. Third, developers can write new software modules through a plug-in architecture; these modules are then made available through the high-level mechanisms described above.
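As a concrete illustration of the first level, the Python sketch below parses a hypothetical scenario file. The XML element and attribute names are invented for illustration and do not reproduce SCREM’s actual schema.

    import xml.etree.ElementTree as ET

    # Hypothetical scenario file: interactive mappings, media objects, and
    # metadata. The schema shown here is an assumption, not SCREM's own.
    SCENARIO_XML = """\
    <scenario name="spring-pendulum" date="2009-01-15">
      <mapping object="glowball-1" parameter="spring.anchor"/>
      <media type="sound" id="spring-stretch" file="spring.wav"/>
    </scenario>"""

    def load_scenario(xml_text):
        """Read a scenario description into a plain dictionary."""
        root = ET.fromstring(xml_text)
        return {
            "name": root.get("name"),
            "date": root.get("date"),
            "mappings": [m.attrib for m in root.findall("mapping")],
            "media": [m.attrib for m in root.findall("media")],
        }

    print(load_scenario(SCENARIO_XML)["name"])  # spring-pendulum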

Data Archival & Annotation

We have developed a software module with a MySQL database backend to archive all sensing and feedback data in real time. This stored data can be accessed for a number of purposes. First, during a given learning session, students can recall and replay movement passages to reflect on their activities. Second, this data is used to update real time context models that can inform our feedback mechanisms. Finally, this data can be used for evaluation and assessment by providing a detailed view of students’ activities over multiple time scales. For example, we are currently examining the relationships between student movement patterns and sonic feedback to better understand how sound can be used to influence movement in service of more efficient learning.
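The Python sketch below illustrates the archive-and-replay idea, using the standard library’s sqlite3 module as a stand-in for the MySQL backend; the table layout is an assumption.

    import sqlite3
    import time

    # Sketch of the archival path using sqlite3 in place of the MySQL
    # backend described above. The frames table layout is assumed.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE frames "
               "(t REAL, object_id TEXT, x REAL, y REAL, z REAL)")

    def archive_frame(object_id, x, y, z):
        """Store one tracked position with a timestamp."""
        db.execute("INSERT INTO frames VALUES (?, ?, ?, ?, ?)",
                   (time.time(), object_id, x, y, z))

    def replay(object_id):
        """Return a movement passage for replay or later analysis."""
        return db.execute(
            "SELECT t, x, y, z FROM frames WHERE object_id = ? ORDER BY t",
            (object_id,)).fetchall()

    archive_frame("glowball-1", 0.2, 1.1, 0.9)
    print(len(replay("glowball-1")))  # one archived frame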
