The Design and Evaluation of a Mixed Reality Approach to Interactively Blend Dynamic Models with Corresponding Physical Phenomena

Permanent Link: http://ufdc.ufl.edu/UFE0024267/00001

Material Information

Title: The Design and Evaluation of a Mixed Reality Approach to Interactively Blend Dynamic Models with Corresponding Physical Phenomena
Physical Description: 1 online resource (146 p.)
Language: english
Creator: Quarles, John
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2009

Subjects

Subjects / Keywords: anesthesia, augmented, interaction, learning, mixed, modeling, reality, simulation, virtual
Computer and Information Science and Engineering -- Dissertations, Academic -- UF
Genre: Computer Engineering thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: People understand how to interact with the objects in the world around them (e.g., an ATM machine, a car) but most people do not understand how these objects operate internally. Moreover, even with an abstract knowledge (e.g., a schematic) of an object, it still may be difficult to apply this knowledge in the context of the real object. To address this challenge, this work presents an interactive approach called a mixed simulator that superimposes abstract visualizations over the corresponding real objects and improves overall understanding of the objects in the surrounding world. For example, to address this challenge in anesthesia education we engineered the Augmented Anesthesia Machine (AAM) - a mixed simulator that superimposes a dynamic, abstract model of a generic anesthesia machine over a real anesthesia machine. Moreover, the machine and dynamic model are synchronized, enabling students to interact with the model through their physical interaction with the machine, such as turning knobs. Students can then visualize how their physical interactions affect the internal workings (e.g., gas flow dynamics) of the machine, effectively affording students abstract 'x-ray vision'. We evaluated the mixed simulator approach in a formal study that investigated the educational benefits specific to mixed simulators. The study compared mixed simulators to several other types of currently used training simulators. Overall we found that mixed simulators compensated for low spatial cognition and more effectively helped users to transfer their abstract knowledge into real world scenarios. To extend mixed simulators, we engineered a novel immersive visualization approach that enabled students and educators to aggregate, filter, visualize, and review massive amounts of previous student interaction data. An informal study suggested that this immersive approach for after action review (AAR) may give students and educators insight into the elusive thought processes and misconceptions of students. Finally, we generalized mixed simulators in both software and theoretical frameworks for a more effective design and implementation process. The theoretical framework enables engineers to classify and design mixed simulators based on the educational needs of an application. The software framework supports engineers with a code infrastructure and an authoring tool for efficient implementation of mixed simulators.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by John Quarles.
Thesis: Thesis (Ph.D.)--University of Florida, 2009.
Local: Adviser: Lok, Benjamin C.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2009
System ID: UFE0024267:00001

Full Text

THE DESIGN AND EVALUATION OF A MIXED REALITY APPROACH TO INTERACTIVELY BLEND DYNAMIC MODELS WITH CORRESPONDING PHYSICAL PHENOMENA

By

JOHN PATRICK QUARLES

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2009

2009 John Patrick Quarles

To my wife Keira for her love, support, and inspiration

ACKNOWLEDGMENTS

I thank Dr. Benjamin Lok for being my research advisor, mentor, and friend. I have learned more from him in the past several years than from all my previous academic experiences. I thank my collaborators, Dr. Paul Fishwick, Dr. Samsun Lampotang, Dr. Ira Fischler, David Lizdas, Dr. Cynthia Kaschub, and Isaac Luria, for their continued support, hard work, and wealth of knowledge. When this dissertation refers to "we," it refers to me and my collaborators. Without their valuable input, this highly multidisciplinary work would never have been possible. Also, I thank Dr. Jeffrey Ho for his continued interest and his input as a member of my committee. Further, I thank all the members of the Virtual Experiences Research Group for their ideas, expertise, and friendship, especially Dr. Andrew Raij, Dr. Kyle Johnsen, Aaron Kotranza, Brent Rossen, Robert Dickerson, Harold Rodriguez, Joon Hao Chuah, Xiyong Wang, Dr. Yongho Hwang, and Lois Cao. I thank the University of Florida for funding me with their Alumni Fellowship and the Institute of Simulation and Training at the University of Central Florida for funding me with the Link Foundation Fellowship. I thank my family for their never-ending support and encouragement: my father, C.A.; my mother, Sonja; my grandmother, Hazel; my sister, Jeje; and my brother-in-law, Matthew Moore. Lastly, I thank my loving and patient wife Keira and our stress-relieving cats Hendrix, Moby, and Leeloo. Without the help and support of all those acknowledged here, none of this would have been possible.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
  1.1 Driving Issues
    1.1.1 Motivation: Augment Understanding of Black Boxes
    1.1.2 Challenges
  1.2 Thesis Statement
  1.3 Overview of Approach
    1.3.1 Visual and Interaction Collocation: The Augmented Anesthesia Machine (AAM)
    1.3.2 Temporal Collocation: Collocated After Action Review (AAR)
    1.3.3 Model Generation Collocation: The Mixed Simulator Software Framework
  1.4 Innovations

2 REVIEW OF LITERATURE
  2.1 Modeling and Simulation
    2.1.1 Terminology
    2.1.2 Visual Modeling
    2.1.3 Integrative Modeling
    2.1.4 Virtual Reality and Simulation
  2.2 Mixed Reality (MR)
    2.2.1 Terminology
    2.2.2 Tracking and Registration Techniques
    2.2.3 Tangible User Interfaces
    2.2.4 Magic Lens Display
    2.2.5 Virtual and Mixed Reality for Review of Past Experiences
  2.3 Visualization
    2.3.1 Immersive Visualization
    2.3.2 Benefits of Immersive Visualization
  2.4 Spatial Cognition and Spatial Ability Tests
    2.4.1 Working Definition
    2.4.2 Spatial Abilities at Different Scales
      2.4.2.1 Figural: The arrow span test
      2.4.2.2 Vista: The perspective taking ability test
      2.4.2.3 Environmental: Navigation of a virtual environment
  2.5 Scaffolded Learning
    2.5.1 Definition
    2.5.2 Technology-Mediated Scaffolding
    2.5.3 Scaffolding with Abstract and Concrete Representations

3 DESIGN AND IMPLEMENTATION OF A MIXED SIMULATOR
  3.1 The Virtual Anesthesia Machine (VAM) and the Anesthesia Machine
    3.1.1 The Gas Flowmeters in the Real Anesthesia Machine
    3.1.2 The Gas Flowmeters in the VAM
  3.2 Mixed Simulator Design Methodology
    3.2.1 Contextualization Method 1: Real Machine Context (AAM-MR)
      3.2.1.1 Visualization with the magic lens
      3.2.1.2 Interaction
      3.2.1.3 HUD visualization
    3.2.2 Contextualization Method 2: VAM-Context
      3.2.2.1 Visualization
      3.2.2.2 Interaction
    3.2.3 Transformation Between VAM-Context and Real Machine Context
  3.3 Implementing a Mixed Simulator
    3.3.1 Visual Contextualization
      3.3.1.2 Visual overlay
      3.3.1.3 Simulation states and data flow
      3.3.1.4 Diagrammatic graph arcs between components
      3.3.1.5 The magic lens display see-through effect
      3.3.1.6 Tracking the magic lens display
    3.3.2 Interaction Contextualization
      3.3.2.1 Using the physical machine as an interface to the dynamic model
      3.3.2.2 Pen-based interaction
    3.3.3 Hardware
  3.4 Chapter Summary
  3.5 Conclusions

4 EVALUATION OF A MIXED SIMULATOR
  4.1 Hypotheses
  4.2 Population and Study Environment
  4.3 Study Conditions
    4.3.1 VAM Group
    4.3.2 AAM-MR Group
    4.3.3 AAM-D Group
    4.3.4 AM-R Group
    4.3.5 AM-D Group
  4.4 Study Procedure
    4.4.1 Day 1 (~90 Minute Session)
    4.4.2 Day 2 (~60 Minute Session)
  4.5 Metrics
  4.6 Results and Discussion
    4.6.1 Learning Outcomes Results
      4.6.1.2 Results of abstract concept understanding metrics: Machine component identification test, machine component function test, short answer anesthesia machine test, multiple choice anesthesia machine test
      4.6.1.3 Results of concrete understanding: Fault tests
      4.6.1.4 Results of matching test results
      4.6.1.5 Results of self-reported difficulty in visualizing gas flow (DVGF)
      4.6.1.6 Discussion of proposed usage of simulations for scaffolding
      4.6.1.7 Discussion of the scaffolding benefits of mixed simulators
    4.6.2 Learning Outcomes Correlation to Spatial Cognition
      4.6.2.1 Discussion of DVGF correlation to spatial cognition
      4.6.2.2 Discussion of abstract concept understanding correlation to spatial cognition
      4.6.2.3 Discussion of matching correlation to spatial cognition
  4.7 Chapter Summary
  4.8 Conclusions

5 IMMERSIVE VISUALIZATION OF LARGE DATA SETS WITH MIXED SIMULATORS
  5.1 Mixed Simulators for Immersive Visualization of Past Experiences
  5.2 After Action Review
    5.2.1 After Action Review Systems
    5.2.2 Video-Based AAR in Education
  5.3 Enabling Immersive Visualization for Students with the Augmented Anesthesia Machine Visualization and Interactive Debriefing System (AAMVID)
    5.3.1 Visualizing Past Experiences in Context with the Real World
    5.3.2 Logging Student and Expert Interaction
    5.3.3 Abstract Visualization of Machine Faults
    5.3.4 Event Chain Visualization
    5.3.5 Playback: Manipulating Virtual Time
    5.3.6 Look-at Indicator
    5.3.7 Viewing Modes
      5.3.7.1 User view mode
      5.3.7.2 Expert view mode
      5.3.7.3 Expert tutorial mode
  5.4 AAMVID-Student (AAMVID-S) Usability Study
    5.4.1 Study Procedure
      5.4.1.1 Day 1 (~90 minute session)
      5.4.1.2 Day 2 (~90 minute session)
    5.4.2 Metrics
    5.4.3 Results
      5.4.3.1 Discussion of understanding
      5.4.3.2 Discussion of confidence
      5.4.3.3 Discussion of subjective benefits
  5.5 Immersive Visualization of Large Data Sets with AAMVID for Educators
    5.5.1 Gaze Maps
    5.5.2 Implementation of Gaze Maps
    5.5.3 Markov Model of Class Interaction
      5.5.3.1 Implementation
      5.5.3.2 Student simulation mode
    5.5.6 Data Filtering
  5.6 Expert Evaluation of AAMVID for Educators
    5.6.1 Evaluation Procedure
    5.6.2 Discussion
  5.7 Chapter Summary
  5.8 Conclusions

6 THEORETICAL AND SOFTWARE FRAMEWORKS FOR MIXED SIMULATORS
  6.1 Theoretical Framework
    6.1.2 The Scaffolding-Space Continuum
      6.1.2.1 Virtuality continuum
      6.1.2.2 Information continuum
      6.1.2.3 Interaction continuum
    6.1.3 Differences Between Continuums
    6.1.4 Scaffolding by Movement Along Continuums
    6.1.5 Example Design Process: An Augmented CRT Monitor
  6.2 Software Framework
    6.2.1 Generalizing Mixed Simulators
    6.2.2 Semantic Network Based Architecture
    6.2.3 Authoring Tool
    6.2.4 Integration with Renderers and Simulators
  6.3 Chapter Summary
  6.4 Conclusions

7 SUMMARY AND FUTURE DIRECTIONS
  7.1 Summary
  7.2 Future Directions

REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Methods of tracking various machine components
4-1 Differences in compared simulations
4-2 Study conditions and metrics used per semester
4-3 Time to complete the 5 training exercises results (first semester)
4-4 Time to complete 5 exercises univariate ANOVA tests (pair-wise differences shown)
4-5 Machine component identification test results
4-6 Machine component function test results
4-7 Short answer anesthesia machine test results
4-8 Abstract concept understanding univariate ANOVA tests (pair-wise differences shown)
4-9 First semester fault test results
4-10 Fault test chi-squared test results
4-11 Matching test results
4-12 Matching test univariate ANOVA results (pair-wise results)
4-13 Self-reported difficulty in visualizing gas flow (DVGF)
4-14 Analysis of DVGF variance (univariate ANOVA with pair-wise differences)
4-15 DVGF correlations to spatial cognition tests
4-16 Written test scores correlations to spatial cognition tests
4-17 Matching correlations to arrow span test

LIST OF FIGURES

1-1 The virtual anesthesia machine (VAM)
1-2 The augmented anesthesia machine (AAM)
1-3 The augmented anesthesia machine visualization and interactive debriefing (AAMVID) system offers an immersive, aggregate visualization of gaze data
2-1 The virtuality continuum
3-1 The virtual anesthesia machine
3-2 Mapping between the physical machine and the VAM
3-3 The augmented anesthesia machine
3-4 A magnified view of the gas flowmeters on the real machine
3-5 A magnified view of the gas flow knobs and bobbins in the VAM
3-6 The VAM is spatially reorganized to align with the real machine
3-7 The user's view of the flowmeters in the Augmented Anesthesia Machine (AAM)
3-8 The real view and the magic lens view of the machine shown from the same viewpoint
3-9 A user turns the N2O knob on the real machine and visualizes how this interaction affects the overlaid VAM model
3-10 The augmented anesthesia machine heads-up display
3-11 The real machine is spatially reorganized to align with the VAM
3-12 VAM-Context interaction
3-13 Geometric transformation between the Real Machine Context and VAM-Context
3-14 Schematic diagram of the AAM hardware implementation
3-15 Transforming a 2D VAM component to contextualized 3D
3-16 The three states of the mechanical ventilator controls
3-17 The pipes between the components represent the diagrammatic graph arcs
3-18 A diagram of the magic lens tracking system
3-19 The 2D tracking output for the anesthesia machine's knobs and buttons
3-20 The augmented Apollo anesthesia machine
4-1 Average function understanding vs. level of abstraction
5-1 Student view in Augmented Anesthesia Machine Visualization and Interactive Debriefing system (AAMVID)
5-2 Real-world view of a user touching an incompetent inspiratory valve and the corresponding AAMVID view of an incompetent inspiratory valve during AAR
5-3 Past interaction event boxes are collocated with the real controls and describe past interactions
5-4 The student can see what an expert was looking at, denoted by the large red spotlight
5-5 Understanding before and after collocated after action review (AAR) (p < .001). Standard error bars are shown
5-6 Confidence before and after collocated AAR (p < .001). Standard error bars are shown
5-7 A gaze map collocated with the machine
5-8 A heat-mapped (on frequency of interaction), directed graph of aggregate student interaction
5-9 The interaction graph is collocated with the machine
6-1 The three continuums that make up the Scaffolding-Space Continuum
6-2 The components of the augmented cathode ray tube (CRT) monitor
6-3 The existing CRT monitor components are mapped to the scaffolding-space continuum
6-4 The augmented CRT monitor that was created using the software framework and authoring tool
6-5 Semantic links are visualized by 3D red lines

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

THE DESIGN AND EVALUATION OF A MIXED REALITY APPROACH TO INTERACTIVELY BLEND DYNAMIC MODELS WITH CORRESPONDING PHYSICAL PHENOMENA

By

John Patrick Quarles

May 2009

Chair: Benjamin Lok
Major: Computer Engineering

People understand how to interact with the objects in the world around them (e.g., an ATM machine, a car), but most people do not understand how these objects operate internally. Moreover, even with abstract knowledge (e.g., a schematic) of an object, it still may be difficult to apply this knowledge in the context of the real object. To address this challenge, this work presents an interactive approach called a mixed simulator that superimposes abstract visualizations over the corresponding real objects and improves overall understanding of the objects in the surrounding world. For example, to address this challenge in anesthesia education we engineered the Augmented Anesthesia Machine (AAM), a mixed simulator that superimposes a dynamic, abstract model of a generic anesthesia machine over a real anesthesia machine. Moreover, the machine and dynamic model are synchronized, enabling students to interact with the model through their physical interaction with the machine, such as turning knobs. Students can then visualize how their physical interactions affect the internal workings (e.g., gas flow dynamics) of the machine, effectively affording students abstract "x-ray vision."

We evaluated the mixed simulator approach in a formal study that investigated the educational benefits specific to mixed simulators. The study compared mixed simulators to several other types of currently used training simulators. Overall, we found that mixed simulators compensated for low spatial cognition and more effectively helped users to transfer their abstract knowledge into real-world scenarios.

To extend mixed simulators, we engineered a novel immersive visualization approach that enabled students and educators to aggregate, filter, visualize, and review massive amounts of previous student interaction data. An informal study suggested that this immersive approach for after action review (AAR) may give students and educators insight into the elusive thought processes and misconceptions of students.

Finally, we generalized mixed simulators in both software and theoretical frameworks for a more effective design and implementation process. The theoretical framework enables engineers to classify and design mixed simulators based on the educational needs of an application. The software framework supports engineers with a code infrastructure and an authoring tool for efficient implementation of mixed simulators.

CHAPTER 1
INTRODUCTION

Within the domain of simulation, there are two important types of traditionally disparate simulation: Physical Simulation and Abstract Simulation [16]. These two types of simulation have both been shown to enhance the training of different skills. For example, Physical Simulations attempt to replicate reality at the highest fidelity possible (e.g., flight simulators). These physical simulators have been shown to be effective for training procedural skills (e.g., piloting a plane). In contrast, abstract simulation simplifies the corresponding physical phenomenon's spatial relationships and photorealism. Abstract simulation visualizes the underlying dynamic models that drive the simulation, an approach that has been effective for training abstract concepts (e.g., how a plane engine operates internally). These two types of simulation teach different yet vital concepts. However, in many applications, it is also important for trainees to combine knowledge from both types of simulation to understand how physical procedures impact the dynamics of the abstract models.

The goal of this research is to design and evaluate a novel approach to merge these two disparate types of simulation and leverage the benefits of both. The resulting simulation can be considered a mixed simulator. In a mixed simulator, the components of the abstract simulation are superimposed over the corresponding real-world phenomena, giving the user an abstracted visualization of the internal dynamic model. Users can then interact with the real phenomena and visualize how their interactions affect the internal abstract dynamic model. The goal of a mixed simulator is to enable the user to mentally understand the connection between abstract concepts and the corresponding physical procedures and components. This dissertation presents our innovations toward this goal with the following:

- Design and implementation of 4 mixed simulators
- Evaluation of the mixed simulator's benefits to learning as compared to other types of simulation
- Extension of the mixed simulator for immersive visualization of past experiences (e.g., debriefing)
- Generic theoretical and software frameworks for software engineers to efficiently design and implement mixed simulators from existing abstract simulations and physical phenomena

1.1 Driving Issues

1.1.1 Motivation: Augment Understanding of Black Boxes

Most of the objects that surround us in our everyday life can be considered black boxes. That is, we understand input (e.g., pressing buttons on an ATM machine) and output (e.g., money then comes out of the machine), but we don't understand the internal workings because they are not visible. Even if we could view the internal workings, they may be too complex to understand how the black box (the ATM machine) works internally. In the case of the ATM machine, it is sufficient to understand only the inputs and outputs of the black box. However, in many high-risk applications, such as military and medical applications, it is vital to understand how one's physical interactions with a black box (e.g., operating on a human body) affect the internal workings (e.g., the internal organs, blood flows). For example, 75% of all anesthesia-related operating room accidents that result in death or brain damage are due to user error, rather than equipment failure [12]. In this application, anesthesia providers must be able to interact with the anesthesia machine, but also be able to troubleshoot problems within the machine in emergent situations. These problems may not be visible and often must be solved in a timely manner. That is, in many applications it is vital to understand how input affects the internal functions of black boxes. However, even if we could look inside an anesthesia machine, the internal workings are so complex that it would still be difficult to understand their function.

Moreover, the gasses are invisible, which would make understanding gas flow dynamics extremely challenging. This challenge is partly addressed with abstract simulation. Abstract simulation has been shown to aid in the understanding of abstract concepts (such as the internal workings of black boxes). Abstract simulations visualize a simplified representation of the underlying dynamic models (e.g., the VAM [36, 72] in Figure 1-1). These models are non-photorealistic, and the spatial relationships between the internal components are laid out in a convenient, visually simple way, which makes it easier to understand the internal workings. However, these simplifications also make it more difficult for many users to apply these abstract concepts in a real-world scenario. This is the main problem that the presented mixed simulator addresses.

1.1.2 Challenges

There are three areas of computer science research that this work impacts: 1) modeling and simulation, 2) visualization, and 3) mixed reality (MR). For each of these areas, mixed simulators address specific challenges. These challenges are described here.

One of the recent grand challenges of modeling and simulation is integrative modeling [17]. Classically, dynamic models are run in the background to drive simulation visualizations. With integrative modeling, abstract representations of these models (e.g., a directed graph for a finite state machine) are rendered as part of the simulation visualization. That is, the model's structure, the model's functionality, the resulting visualization, and the interface to the model and the visualization are all integrated into the same visual and interactive context. The challenge is the integration. The ultimate goal is to make integrative modeling efficient, generalizable, and understandable to the user. To address this goal, we developed an MR-based approach (i.e., the mixed simulator) for integrative modeling that visually and interactively combines dynamic models and corresponding physical phenomena in one collocated space.

One of the main challenges presented by the area of visualization is the visualization of massive datasets in a perceptible way. Often, real-world datasets include massive databases with millions of entries. Humans inherently have difficulty synthesizing these large amounts of raw data [64]. The goal of visualization is to structure information (e.g., numerical data tables) into a visual format that offers users insight into the trends and patterns in the data. To address this challenge and build upon the previous work in visualization, we developed a mixed simulator based approach to facilitate immersive visualization of large data sets.

One of the main challenges of MR is to identify the benefits of MR technology. In the last ten years, there have been many advances and innovations in MR displays and interfaces, but it is not known how this technology really impacts the end user. It is a challenge to 1) identify the metrics that should be used for evaluating this impact and 2) design studies that compensate for the errors present in the technology (e.g., sensor accuracy). That is, it is difficult to accurately evaluate MR technology, since the tracking and registration errors could potentially propagate to the cognitive and perceptual functions of the user, thereby impacting user interaction. However, our study attempts to address this challenge and formally evaluate the MR-based mixed simulator with the ultimate goal of determining the benefits of MR in general.

1.2 Thesis Statement

The merging of abstract dynamic models and their corresponding physical phenomena can be enabled through mixed reality's collocation of real and virtual environments in space and time. This merging allows users to visualize, interact with, and review abstract concepts in the context of the real world, thereby compensating for limitations in spatial cognition and enabling users to more effectively transfer their abstract knowledge into the corresponding real-world scenario than current methods of simulation-based training.

1.3 Overview of Approach

To evaluate this thesis, the following work was conducted. First, we designed the concept of a mixed simulator, which performs MR-based visual and interaction collocation of dynamic models and corresponding physical phenomena. We implemented four instances (i.e., three different anesthesia machines and a mannequin patient simulator) of our design for the application of medical training simulation and formally evaluated our approach. Then we extended the mixed simulator for immersive visualization of large datasets in the application of after action review (AAR). This extension included a logging system and immersive visualization tools for review of and interaction with one or more past (e.g., logged) experiences. We call this approach temporal collocation. We conducted an evaluation of mixed simulator based temporal collocation to determine the potential advantages of mixed simulators for immersive visualization of large data sets. Finally, we engineered generic theoretical and software frameworks that enable engineers (i.e., software engineers who are inexperienced with MR) to efficiently design and implement mixed simulators from their existing abstract and physical simulation models. We call this approach model generation collocation.

1.3.1 Visual and Interaction Collocation: The Augmented Anesthesia Machine (AAM)

Different types of simulation train different types of knowledge. That is, abstract simulations (e.g., the VAM in Figure 1-1) train abstract knowledge (e.g., gas flow dynamics), whereas physical simulations (e.g., a real anesthesia machine) train concrete knowledge (e.g., procedures, psychomotor skills). However, applying abstract concepts in concrete domains can be a difficult learning challenge. To address this challenge, we created an MR-based approach that utilizes visual and interactive collocation of abstract simulations and their corresponding physical counterparts.

For an example implementation of our approach, we created an Augmented Anesthesia Machine (AAM) (Figure 1-2) that visually and interactively collocated the abstract VAM simulation with a physical machine, an Ohmeda Modulus II. For visual collocation, the animated VAM components and the simulation visualization of the gas flow dynamics were superimposed over the physical machine. Users interacted with a tracked, 6DOF see-through window called a magic lens to view the overlaid simulation from a first-person perspective. For interaction collocation, users physically interacted with the real machine as a tangible interface to the abstract simulation. This enabled users to abstractly visualize how their physical interactions impacted the internal workings of the machine.

To evaluate how our approach impacted learning, we conducted a between-subjects study with 130 participants in which we compared AAM-based training to four other types of simulation-based training. We compared training in the following simulations: (1) the AAM, (2) the VAM, (3) the real machine without any additional simulation, (4) a desktop-based version of the AAM, and (5) a desktop-based photorealistic simulation of the real machine. Results suggested that the AAM's visual and interaction collocation compensated for low spatial cognition, enabled users to merge abstract concepts with concrete knowledge, and improved overall training transfer to real-world scenarios.
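To make the magic lens idea concrete, the core of such a display is re-rendering the registered overlay each frame from the lens's tracked 6DOF pose, so that the virtual VAM components stay aligned with the physical machine behind the screen. The following sketch illustrates that per-frame loop; it is not the AAM's actual code, and the tracker, camera, and renderer interfaces are hypothetical stand-ins.

```python
import numpy as np

def view_matrix_from_pose(position, rotation):
    """Build a world-to-lens view matrix from a tracked 6DOF pose.
    position: 3-vector of the lens origin in world space.
    rotation: 3x3 matrix giving the lens orientation in world space."""
    view = np.eye(4)
    view[:3, :3] = rotation.T             # inverse of a rotation is its transpose
    view[:3, 3] = -rotation.T @ position  # move the world so the lens sits at the origin
    return view

def render_magic_lens_frame(tracker, camera, renderer, vam_overlay):
    """One frame of the see-through effect: draw the camera's view of the
    real machine first, then the abstract VAM overlay registered on top."""
    position, rotation = tracker.get_lens_pose()     # assumed 6DOF tracker API
    view = view_matrix_from_pose(position, rotation)
    renderer.draw_background(camera.latest_image())  # real-world backdrop
    renderer.draw(vam_overlay, view)                 # registered abstract overlay
```

Because the overlay is redrawn from the lens pose every frame, any tracking error appears directly as registration error, which is one reason Chapter 3 devotes a section to tracking the magic lens display.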

1.3.2 Temporal Collocation: Collocated After Action Review (AAR)

Humanly perceptible visualization of large data sets is a grand challenge in the area of visualization. We investigated the potential advantages of mixed simulators in immersive information visualization of large data sets. To conduct our investigation, we implemented a mixed simulator for after action review (debriefing), which is a widely used educational practice that affords insight into strengths and weaknesses through the review of multiple past experiences [13] (e.g., watching a video of oneself or others during a past training exercise).

However, these types of systems face the following challenges: (1) It is difficult for reviewers to know where to focus their attention; that is, given an entire interaction over a period of time, reviewers may not focus on the relevant parts of the played-back experience. (2) It is difficult for reviewers to synthesize many reviews for a much-needed aggregate or meta-review. (3) Reviewers are often spatially displaced from the training area, such as in another room or at home. This spatial displacement may make spatial cognition more challenging during the review due to a lack of real-world context.

We designed a mixed simulator based after action review system that addresses these challenges with a novel immersive visualization system, called the Augmented Anesthesia Machine Visualization and Interactive Debriefing (AAMVID) system. AAMVID (Figure 1-3) offers the following innovations for focus-and-context immersive visualization: abstract collocated after action review (i.e., registered self-debriefing inside the training space augmented with abstract visualization), automatic review annotation (e.g., automatic indexing of interactions), and immersive aggregate visualization of large data sets (e.g., gaze and interaction data from a class of students).

We evaluated the benefits of AAMVID in a pilot study with 19 psychology students who tested the usability of the system and 3 anesthesia education experts who performed an informal review of the system's capabilities. Overall, results suggested that AAMVID was a viable after action review system, and the immersive, aggregate visualizations and novel interaction techniques afforded insight into the elusive thought processes of students. This suggests that our mixed simulator based focus + context [64] approach (i.e., visualizing abstract information in the context of the real world) may offer a novel enhancement to immersive information visualization in general.
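A system like this depends on recording each session as a stream of timestamped interaction events that can later be replayed, indexed, and aggregated across a whole class. Below is a minimal sketch of that idea; the field names and API are illustrative assumptions, not AAMVID's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class InteractionEvent:
    timestamp: float    # seconds since the start of the session
    component: str      # e.g., "O2 flow knob"
    action: str         # e.g., "turned", "pressed", "gazed_at"
    value: float = 0.0  # e.g., the new knob setting, if any

@dataclass
class SessionLog:
    student_id: str
    events: list = field(default_factory=list)

    def record(self, event):
        self.events.append(event)

    def events_between(self, t0, t1):
        """Slice of the session between two virtual times, for playback."""
        return [e for e in self.events if t0 <= e.timestamp <= t1]

def aggregate_component_counts(logs):
    """Count interactions per machine component across many students'
    logs, e.g., to heat-map what a whole class touched most."""
    counts = {}
    for log in logs:
        for e in log.events:
            counts[e.component] = counts.get(e.component, 0) + 1
    return counts

# Example: one student's session, then an aggregate over a "class" of one.
log = SessionLog(student_id="s01")
log.record(InteractionEvent(timestamp=3.2, component="O2 flow knob",
                            action="turned", value=2.0))
print(aggregate_component_counts([log]))  # {'O2 flow knob': 1}
```

Counts aggregated this way are the kind of data that can drive a heat-mapped interaction graph like the one collocated with the machine in Figures 5-8 and 5-9.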

1.3.3 Model Generation Collocation: The Mixed Simulator Software Framework

Although mixed simulators could benefit many current simulations, most simulation engineers do not have the time or expertise to create mixed simulators from their existing simulations. To address this challenge, we created theoretical and software frameworks and a simple authoring tool to enable simulation engineers (e.g., engineers who may be inexperienced with MR) to more efficiently design and implement mixed simulators from their 2D abstract simulations. The frameworks enable an interactive design process for mixed simulator applications that aids engineers in designing and implementing the semantic network that exists inherently between abstract and physical simulators (i.e., abstract simulators are an abstraction of physical phenomena). This enables the engineers to spend more time implementing the simulation models and less time struggling with conversion to an MR-based visualization. Ultimately, these generic theoretical and software frameworks demonstrate the generality of the mixed simulator approach.

1.4 Innovations

Our innovations were:

- Designing and evaluating a novel MR-based approach to combine abstract and physical simulation
- Proposing and implementing mixed simulator based immersive visualization of large datasets for the application of after action review
- Creating theoretical and software frameworks to make design and implementation of mixed simulators more efficient and generalized

This work effectively integrates abstract simulations with corresponding physical phenomena. These mixed simulators leverage the realism already present in complex real objects (e.g., the physical components and controls of an anesthesia machine), but abstract the complexities of the internals for ease of understanding. Our studies suggest that mixed simulators improve training transfer more than several other current types of interactive simulation visualization approaches in medical training.

Furthermore, this work presents the concept of enhancing immersive information visualization with mixed simulators by providing additional real-world context to abstract visualization. To investigate this, we proposed and implemented a mixed simulator based collocated after action review system for immersive visualization of past experiences. Unlike traditional video-based after action review, collocated after action review can be performed in situ with the training area using MR technology. The results of a pilot study suggest that collocated after action review is a viable approach to after action review and potentially aids educators in identifying trends in large classes of students. This suggests that mixed simulators offer a novel enhancement for immersive information visualization of large data sets.

Lastly, we created a theoretical framework and an extensible software framework, as well as an authoring tool, to enable efficient design and implementation of mixed simulators. The frameworks offer engineers an iterative design process and the ability to interactively define a semantic network that effectively describes the relationship between abstract and physical objects in a mixed simulator. Using this semantic network, simulation engineers can efficiently integrate their models with both MR and other interactive simulation approaches. These frameworks may make the design and implementation of mixed simulators more efficient and lay the foundation for higher-level mixed simulator authoring tools and applications, potentially benefiting simulation engineers who are not MR technicians. These frameworks demonstrate the generalizability of the mixed simulator approach.
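The semantic network at the heart of the software framework can be pictured as a set of links between abstract model components and their physical counterparts, so that tracked input on the physical side drives the dynamic model on the abstract side. Here is a minimal sketch of that mapping, with purely illustrative names and an assumed simulator interface:

```python
class SemanticNetwork:
    """Links between abstract model components (e.g., a VAM icon) and their
    physical counterparts (e.g., a tracked knob), so that tracked input on
    the physical side can drive the abstract dynamic model."""

    def __init__(self):
        self.links = {}  # physical component name -> abstract component name

    def link(self, physical_name, abstract_name):
        self.links[physical_name] = abstract_name

    def on_physical_input(self, physical_name, value, simulation):
        """Forward a tracked physical interaction (e.g., a knob turn) to the
        linked abstract component; `simulation.set_input` is an assumed API."""
        abstract_name = self.links.get(physical_name)
        if abstract_name is not None:
            simulation.set_input(abstract_name, value)

# Example: link the real O2 flow knob to the VAM's O2 flow control.
network = SemanticNetwork()
network.link("real_O2_knob", "vam_O2_flow_control")
```

In the actual framework, this mapping is defined interactively through the authoring tool rather than in code, which is what lets engineers who are not MR technicians build mixed simulators from their existing simulations.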


Figure 1-1. The virtual anesthesia machine (VAM): an Adobe Director (Shockwave)-based abstract simulation of a generic anesthesia machine. [Reprinted with permission from Lampotang, S. 2009. University of Florida Department of Anesthesiology, The Virtual Anesthesia Machine. Copyright 2009, University of Florida. Retrieved February 6, 2009 from http://vam.anest.ufl.edu]

Figure 1-2. The augmented anesthesia machine (AAM). A) The diagrammatic VAM icons are superimposed over a model of an anesthesia machine. B) A student uses the magic lens to visualize the VAM superimposed over the real machine.


Figure 1-3. The augmented anesthesia machine visualization and interactive debriefing (AAMVID) system offers an immersive, aggregate visualization of gaze data.


CHAPTER 2
REVIEW OF LITERATURE

This chapter discusses the literature relevant to the goal of this work, the combination of abstract and physical simulations. The chapter is organized around the main areas that our work expands upon: modeling and simulation, mixed reality (MR) visualization, immersive visualization, spatial cognition, and technology-mediated scaffolding. The first section focuses on modeling and simulation: how the field has evolved from its early computational foundations to a new visual modeling paradigm, and the grand challenges for human-computer interfaces (HCI) in modeling and simulation that the mixed simulator addresses. The second section reviews mixed reality: the interfaces and displays prevalent in the field and techniques for review of past experiences. The third section focuses on immersive visualization approaches and the associated cognitive and perceptual benefits of immersive visualization. Finally, other relevant literature from psychology and education is described.

2.1 Modeling and Simulation

In order to frame the current innovations in mixed simulators, this section reviews the history of modeling and simulation that led to the development of mixed simulators. The section first reviews the terminology used throughout this dissertation. Then it describes the evolution of modeling and simulation from its beginnings in mathematically driven simulation to more visual modeling paradigms and the eventual incorporation of HCI techniques, as in integrative modeling. The goal of this section is to describe how mixed simulators build upon this relatively new concept of integrative modeling.

2.1.1 Terminology

Since the inception of modeling and simulation in the 1960s, the definition of the term model has been increasingly overloaded. In foundational modeling and simulation, to model is to abstract from reality a description of a dynamic system [16].


Generally, these simulation models are represented in program code (for numerous examples see [38]) or mathematical equations [3]. However, the term model can also refer specifically to descriptions of geometry. This usage is especially prevalent in domains outside of modeling and simulation, such as computer graphics, mixed reality, and visualization. Often in these fields, the term model is used interchangeably with the term geometric model. These models are often associated with 3D modeling programs such as Maya or Blender. For example, a common format for geometric models is a file that contains a list of triangles approximating a 3D surface. Throughout the rest of this text, the term model may be used interchangeably to mean either simulation model (i.e., dynamic model) or geometric model (i.e., 3D model). As in the modeling and simulation literature, the reader can derive the appropriate definition from the context of its usage.

2.1.2 Visual Modeling

Near the end of the 1970s, there was a paradigm shift from traditional programmatic and mathematical modeling. Modeling languages, such as GASP [56], began to incorporate more interactive computer graphics and animation in simulations. For example, GASP IV incorporated model diagrams that could easily be translated into GASP code. This was one of the earlier efforts to merge simulation programming with visual modeling. The success of languages like GASP IV resulted in a shift in focus from programmatic modeling to visual modeling. A good repository of visual model types can be found in [16]. Model types such as Petri nets, functional block models, state machines, and system dynamics models are used in many different types of simulations and can be represented in a visual way. They are similar in appearance to a flow chart that non-programmers and non-mathematicians can understand and use.
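To make the functional block idea concrete, the following minimal sketch (ours, not taken from GASP or any cited tool) represents a functional block model as a small graph of named blocks whose outputs feed other blocks' inputs; the two-block example and its constants are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Block:
    """One node in a functional block model: a named transfer function."""
    name: str
    func: Callable[[float], float]
    inputs: List[str] = field(default_factory=list)  # names of upstream blocks

def step(blocks: Dict[str, Block], state: Dict[str, float]) -> Dict[str, float]:
    """Evaluate every block once, feeding each the sum of its inputs' outputs."""
    return {
        name: blk.func(sum(state.get(src, 0.0) for src in blk.inputs))
        for name, blk in blocks.items()
    }

# Hypothetical two-block model: a constant source feeding a gain block.
model = {
    "source": Block("source", lambda _: 2.0),
    "gain":   Block("gain",   lambda x: 3.0 * x, inputs=["source"]),
}
state = {name: 0.0 for name in model}
for _ in range(3):            # iterate until signals propagate through the graph
    state = step(model, state)
print(state["gain"])          # 6.0
```

The same graph structure is what a visual editor draws as boxes and arrows, which is why such diagrams translate so directly into simulation code.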


This shift to visual modeling made modeling tools more accessible and usable for modelers across the field of simulation. For example, Dymola [52] and Modelica [45] are languages that support real-time modeling and simulation of electromechanical systems. Dymola and Modelica both support continuous modeling that evolved from analog computation [10]. Thus, Dymola and Modelica users create visual continuous models in the form of bond graphs, using sinks, power sources, and energy flows as visual modeling tools.

Pidd [55] outlines major principles that can aid in designing a discrete event modeling editor with high usability and acceptance by users. According to Pidd, the most usable interfaces are simple, intuitive, disallowing of dangerous behavior, and offer the user instant and intelligible feedback in the event of an error. These principles are derived from more general HCI principles presented in [50] and supported by theories about learning and cognitive psychology [32].

2.1.3 Integrative Modeling

Although M&S has adopted some HCI methodologies to aid in the creation of models and modeling tools, minimal research has been conducted on effectively integrating user interfaces and visualization into the models themselves. Integrative modeling [17, 53, 63] is an emerging field that addresses these issues. The goal of integrative modeling is to blend abstract model representations with more concrete representations, such as a geometric representation. This blending is achieved through a combination of HCI, visualization, and simulation techniques. Novel interfaces are incorporated as part of the simulation model, helping the user to visualize how the various representations are related. For example, [53] used morphing as a way to visually connect a functional block model of the dynamics of aircraft communication to the 3D aircraft configurations during flight. That work served as a preliminary study into the use of ontologies for generating one particular domain model integration. Mixed simulators build upon this concept of integrative modeling by using mixed reality as a means for relating real world objects to various representations of dynamic and geometric models.


Hopkins and Fishwick [22] presented a software framework called RUBE that enabled multi-model integration. In RUBE, the modeler could interactively connect many different types of models, such as finite state machines, functional block models, and Petri nets. The representation of the models could then be interactively changed to use a different metaphor, while the underlying model stays the same. The RUBE framework is a conceptual basis for our software framework presented in Chapter 6. Our software framework extends the concepts set forth in RUBE by defining semantic networks as a description of the relationship between abstract geometric models, physical geometric models, and real objects.

2.1.4 Virtual Reality and Simulation

Virtual reality (VR) [68] is a related field that addresses some of the aforementioned HCI issues. For example, VR has been utilized to address ergonomics challenges [76]. Many VR applications in modeling and simulation are outlined in [4]. In [44], the authors identify the inefficiencies of typical VR systems, such as a high communication load due to concurrent simulations and a lack of modularity. When integrating simulation, the authors propose a unifying communication framework for linking simulation and VR based on object-oriented design that reduces the overall amount of message passing between simulation nodes. They also pose a challenge to HCI and VR researchers to create interfaces for the design of such communication-minimizing environments. The authors in [20] build upon this by using VR as a 3D authoring tool for simulation models. The software framework presented in Chapter 6 extends these ideas by offering methods of designing mixed simulators and proposes semantic networks as a linking data structure between disparate visualization and simulation types, such as abstract and physical simulations.


2.2 Mixed Reality (MR)

Mixed reality (MR) is a field that evolved from virtual reality. While virtual reality tries to effectively simulate all aspects of physical reality in a purely virtual environment, mixed reality visually and interactively integrates virtual objects with the immediate surrounding real environment. To aid in performing this integration, mixed reality research has pioneered tracking and registration techniques, displays, and interaction methods, and has developed applications of these methods in areas such as review of past experiences. The purpose of this section is to overview the technology and relevant applications that enabled the development of mixed simulators.

2.2.1 Terminology

In 1994, Milgram and Kishino [48] laid the framework for a new area of virtual reality research, called MR, that takes a different approach to interaction and visualization. Instead of simulating a purely virtual world, MR systems visually and interactively combine the virtual world with the real world. In MR, users can visualize some virtual objects, but they can also see the real world and interact with real objects. Often the real objects are tracked and used as interfaces to the virtual world. Then, by interacting with real objects, users can interact with the virtual world. However, virtual worlds can be combined with the real world in many different ways. When developing MR applications, developers must decide how much of the environment should be virtual and how much of it should be real. To aid in this critical decision, Milgram and Kishino proposed the Virtuality Continuum (Figure 2-1). The continuum spans from real environments, where all objects have an actual objective existence, to virtual environments, where all objects are virtual or simulated. MR encompasses the area of the continuum between the completely real and the completely virtual. Along with this continuum, Milgram presents a taxonomy of the different categories in which MR can mix virtual and real objects in a mixed environment.


The authors laid out two main categories of combining the virtual and real worlds in MR:

- Augmented Reality (AR): Virtual objects and information are superimposed into the real world so that they appear to be real, often by means of superimposing 3D graphics over a live video stream. AR systems use overlaid virtual information to augment the user's perception of the real world. In an AR environment, users view and interact with a higher proportion of real objects than virtual objects.
- Augmented Virtuality (AV): Real world objects or information are superimposed into the visualization of a virtual world, often by means of some type of 3D reconstruction of a real object. Thus, AV systems integrate real objects into the virtual world to augment the user's perception of the virtual world. In an AV environment, users view and interact with a higher proportion of virtual objects than real objects.

2.2.2 Tracking and Registration Techniques

To support the development of the mixed environments proposed in the Virtuality Continuum, much research has been conducted on performing the seamless integration of real and virtual objects. Specifically, tracking and registration research focuses on solving the problem of accurately aligning virtual objects with real objects so that they appear to exist in the same space [2]. One approach to registration is to affix fiducial markers to the real objects in the scene. There are many approaches to tracking fiducial markers, such as the ARToolkit [31] approach or using stereo images to track retro-reflective IR markers [73]. Azuma overviewed the state of augmented reality technology in 1997. He cites several of the major application domains and gives specific examples in each, such as ultrasound imaging, computer repair, and military applications. He surveys the then-current set of problems in augmented and mixed reality, such as video versus optical see-through, registration issues, and sensor accuracy problems. Many of these techniques were used to enable the implementation of mixed simulators. Without these techniques, abstract visualizations could not easily be integrated with the physical simulations in the context of the surrounding real world environment.
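At its core, marker-based registration is a composition of rigid transforms: the tracker reports where the marker is relative to the camera, and a calibrated offset says where the virtual content sits relative to the marker. The sketch below is our own illustration of that composition (not code from any cited tracker); the pose values are hypothetical stand-ins for what a library such as ARToolkit would report each frame.

```python
import numpy as np

def pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a translation into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pose of a fiducial marker in the camera frame, as reported by a tracker
# (identity rotation here as a stand-in for a real per-frame estimate).
camera_T_marker = pose(np.eye(3), np.array([0.0, 0.1, 0.8]))

# Fixed offset from the marker to the virtual overlay, measured once during
# calibration (hypothetical values, in meters).
marker_T_overlay = pose(np.eye(3), np.array([0.05, 0.0, 0.02]))

# Registration: compose the two transforms to place the virtual object in
# the camera frame, so it stays visually locked to the real object.
camera_T_overlay = camera_T_marker @ marker_T_overlay
print(camera_T_overlay[:3, 3])   # overlay position in camera coordinates
```

Re-estimating camera_T_marker every frame and reusing the fixed calibration offset is what keeps the virtual object aligned as the camera or the object moves.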


2.2.3 Tangible User Interfaces

Although much previous work was largely focused on tracking and displaying virtual objects in the context of the real world, tangible interface research focuses on approaches for using real objects to facilitate human interaction with virtual objects. A tangible interface [25] is an interface that employs real objects as "both representations and controls for computational media" [71]. For example, a classic interface for a computer simulation is a graphical user interface (GUI), in which the user clicks on buttons, sliders, etc. to control the simulation. The main purpose of a GUI is control. Like a GUI, a tangible user interface (TUI) is used for control of the simulation, but the TUI is also an integral part of that simulation, often a part of the phenomenon being simulated. Rather than just being a simulation control, a TUI also represents a virtual object that is part of the simulation. In this way, interacting with the real object (i.e., a real anesthesia machine) facilitates interaction with both the real world and the virtual world at the same time. For example, NASA engineers performed a virtual assembly using real tools in MR [41, 42]. Through interacting with a real tool as a tangible interface, they were able to interact with the virtual objects and complete the assembly. Building upon this previous work, a mixed simulator utilizes the physical simulator as a tangible interface to the abstract simulation.

To evaluate the benefits of tangible interfaces, Ware and Rose [75] performed several experiments that investigated the impact of using real objects to rotate virtual objects. They looked at the impact of two-handed and one-handed interaction, mismatch in position, mismatch in shape, and visual feedback. Overall, they found that having one's hands in situ with the object space improved both speed and accuracy.


Furthermore, Insko et al. [24] conducted two studies that looked at the impact of tangible interfaces in virtual environments on presence and training transfer. In the first study, participants experienced a virtual pit, in which the user walked around a high, narrow ledge above a room far below. In the study, some participants interacted with a passive haptic ledge while others did not. Passive haptics were found to significantly increase presence and overall avoidance behavior. The second study involved participants navigating through a room with different sized objects at different distances. They trained using VR with or without passive haptics (i.e., the real objects were registered to the virtual ones). Passive haptics enhanced the users' ability to navigate and interact with the real environment and changed their perception of distance within the virtual environment. Many of these study results served as motivation for the design of mixed simulators as tangible interfaces.

2.2.4 Magic Lens Display

All the mixed simulators presented in this dissertation are implemented with a hand-held, 6DOF-tracked display, a magic lens, as the main display device. Magic lenses were originally created as 2D interfaces [5]. 2D magic lenses are movable, semi-transparent regions of interest that show the user a different representation of the information underneath the lens. They were used for such operations as magnification, blur, and previewing various image effects. Each lens represented a specific effect. If the user wanted to combine effects, two lenses could be dragged over the same area, producing a combined effect in the overlapping areas of the lenses. The overall purpose of the magic lens was to show underlying data in a different context or representation. This purpose remained when the magic lens was extended from 2D into 3D [74]: instead of using squares and circles to affect the underlying data on a 2D plane, boxes and spheres were used to give an alternate visualization of volumetric data.


In mixed and augmented reality, these lenses have again been extended to become hand-held tangible user interfaces and display devices, as in [43]. With an augmented reality lens, the user can look through the lens and see the real world augmented with virtual information within the lens region of interest (i.e., defined by a fiducial marker or the LCD screen of a tablet PC-based lens). The lens acts as a filter or a window onto the real world and is shown in perspective with the user's first-person view of the real world. Thus, the MR/AR lens is similar to the original 2D magic lens metaphor, but has been implemented as a 6DOF tangible user interface instead of a 2D graphical user interface. As stated, all the mixed simulators presented in this study feature magic lenses as their primary display.

2.2.5 Virtual and Mixed Reality for Review of Past Experiences

Much of the aforementioned technology has been used in the development of debriefing applications (i.e., reviews of past experiences). Our collocated after action review (AAR) system (Chapter 5) builds upon this previous work. IPSVis [61] is an after action review system geared towards interpersonal simulation, specifically human-virtual human interaction. Medical students use IPSVis for review of physician-patient interviews using virtual human patients. IPSVis was shown to impact students' self-perception. There has also been relevant work in using past expert interactions to direct training. Chua et al. [11] created a system to train students with expert Tai Chi movements from a user-controlled, first-person perspective. Sielhorst et al. [65] provided methods to synchronize trajectories of 3D objects for user feedback and review. Two algorithms were evaluated, and dynamic time warping, a widely used algorithm in speech analysis, was found to be the most effective. This enabled users to visualize two time-invariant trajectories of 3D objects in situ with the real world. Their application was an infant delivery simulator in which users needed to visualize and compare the forceps interactions of multiple users.
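To illustrate the alignment idea behind dynamic time warping (a generic textbook sketch, not Sielhorst et al.'s implementation), the code below scores the similarity of two 3D trajectories sampled at different rates by minimizing cumulative point-to-point distance; the sample trajectories are hypothetical.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping cost between two trajectories of 3D points.

    a has shape (n, 3) and b has shape (m, 3); timing differences between
    the trajectories are absorbed by the warping, so two paths through the
    same positions at different speeds score as similar.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of: match both, skip a point in a, skip in b.
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return float(cost[n, m])

# A straight-line path and the same path sampled twice as densely:
slow = np.linspace([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 20)
fast = np.linspace([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 10)
print(dtw_distance(slow, fast))  # near 0: same path, different timing
```

Backtracking through the cost table (not shown) recovers the actual point-to-point correspondence, which is what makes side-by-side replay of two performances possible.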


Our work, mixed simulator-based collocated after action review, extends this previous work. We utilize mixed reality to: 1) enable self-debriefing (e.g., review without an expert present) with abstract simulations and 2) visualize large amounts of data in an immersive setting (e.g., inside the training area).

2.3 Visualization

One purpose of visualization is to take large amounts of data (e.g., numerical or ordinal) that are not humanly perceptible and render the data in a visual representation that is humanly perceptible. Fundamentally, visualization seeks to enhance the effectiveness and efficiency of human perception with computer graphics. Much of the relevant previous work in this area centers on enhancing visualization with virtual and mixed reality immersive displays. Our novel immersive visualization system for collocated after action review (Chapter 5) extends the previous immersive visualization work presented in this section.

2.3.1 Immersive Visualization

Focus + context [64] is a visualization paradigm that seeks to draw the user's attention to certain important objects in a visually complex scene, while also representing the focus objects' relationships to the other objects in the scene. Kalkofen et al. [30, 47] present a method and implementation of focus + context visualization in MR. Their approach consists of modeling the scene graph with a markup language that includes visualization/compositing routines, context families (e.g., objects that should be viewed at the same time), and rendering order. Based on this graph, the software uses the GPU to process and render the composited images. These composites, which take into account focus (e.g., the importance of objects) and context (e.g., which objects should be augmented and displayed together), can enable x-ray visualizations, for example, that potentially enhance visual perception of the environment. The context of these augmentations can also be made interactive through the use of magic lenses.


Our work builds upon this previous work by creating novel mixed simulator-based approaches for focus + context with overlaid abstract simulation for immersive visualization in context with the real world.

2.3.2 Benefits of Immersive Visualization

Many researchers have studied the cognitive and perceptual benefits of immersive visualization. This previous work was instrumental in motivating the design of our immersive visualization system. For example, in [18] the authors studied how the virtual overlay of MR can be represented in various ways to facilitate the visualization of occluded surfaces (i.e., seeing through walls). Specifically, they looked at how people perceive depth when shown a visualization of occluded surfaces. In their results, they found that the interposition cue dominated all other depth perception cues, such as motion parallax or shading and textures. Thus, when the MR display rendered occluded surfaces in front of other occluded surfaces, participants always chose the front rendering, even if the perspective cue showed otherwise. Bowman et al. [7] evaluated the effect of displays with various levels of immersion, resolution, and field of view and found specific benefits to spatial cognition with each type of display. For example, larger, more immersive displays decrease the user's cognitive load when the user views spatially complex scenes, as compared to smaller displays. Moreover, although MR displays are intended to enhance perception, many of these displays cause problems with basic visual perception, like color perception and visual acuity. Livingston [40] found that the limited resolution and color representation of many of these displays limits visual perception of reality (as seen through an MR display). The results of these studies led us to research the use of mixed simulators for immersive visualization.


2.4 Spatial Cognition and Spatial Ability Tests

Spatial cognition plays a major role in human perception and understanding. It has a direct impact on how we learn and how we perceive the complex world around us. It is vital to consider the impact of spatial cognition when designing and evaluating immersive visualization tools such as mixed simulators. This section gives a brief background on spatial cognition and describes the relevant tests used to assess the mixed simulators' impact on spatial ability.

2.4.1 Working Definition

Spatial cognition addresses how humans encode spatial information (i.e., about the position, orientation, and movement of objects in the environment), and how this information is represented in memory and manipulated internally [21].

2.4.2 Spatial Abilities at Different Scales

Cognitive psychology research considers spatial cognition abilities at different scales. Each of these scales corresponds to different types of spatial challenges. For example, navigation of an environment (i.e., as in the sketch map study conducted by Billinghurst et al. [6]) would be considered large-scale, whereas typical paper tests (i.e., the Vandenberg Mental Rotations Test) are considered small-scale tests. A person's large-scale and small-scale spatial cognition abilities may be independent; in fact, there has been some empirical evidence in the way of factor analyses that these abilities are relatively independent [21]. Thus, to broadly assess the spatial abilities of a person, the person should be given several tests, each of which assesses spatial ability at a different scale. For the purposes of our research, three tests are used to assess participants' spatial cognition at three different scales: figural, vista, and environmental. These spaces and the associated tests used in our study are outlined in the following sections. These tests were taken from the spatial cognition literature in psychology.


For more detailed information about the tests we used, spatial ability at different scales, additional tests, and comparisons between the different tests, refer to Hegarty et al. [21].

2.4.2.1 Figural: The arrow span test

The figural scale is "small in scale relative to the body and external to the individual, and can be apprehended from a single viewpoint" [21]. To assess figural scale ability, the Arrow Span Test measures the ability to maintain spatial information in working memory. The test shows participants a sequence of 2D arrows, shown one by one and randomly in one of 8 orientations (upright and increments of 45 degrees from upright). Participants are asked to recall the sequence from memory and type the answers using the numeric keypad. Participants are shown 15 sequences of 2D arrows. As they progress through the 15 sequences, the number of arrows in each sequence gradually increases from 2 to 6 arrows. For each arrow orientation recalled correctly, the participant gains one point. With 60 total arrows shown, there is a maximum possible score of 60.

2.4.2.2 Vista: The perspective taking ability test

The vista scale is "projectively as large or larger than the body, but can be visually apprehended from a single place without appreciable locomotion" [21]. To assess vista scale ability, participants in our study took the Perspective Taking Ability Test, which measures the ability to encode, maintain, and transform spatial representations at the vista scale of space. Four objects (a cup, a keyboard, a broom, and a suitcase) were placed at the center of each wall of a real, square 8 m x 8 m room. Participants were told to learn the relative locations of each of the four objects. Participants are given as much time as needed but generally do not take longer than about 3 minutes. Then, using a computer, they are asked several questions about the objects' locations. For example: "You are standing in front of the cup and facing the center of the room. Point to the keyboard."


The participant uses arrow keys on a keyboard to indicate the direction they are pointing. Their score is based upon how many objects are pointed to correctly and how long it takes (in milliseconds) to enter each answer.

2.4.2.3 Environmental: Navigation of a virtual environment

Environmental space is "large in scale relative to the body and contains the individual"; environmental tests usually include locomotion (i.e., navigating through a maze) [21]. To assess environmental scale ability, participants navigate a virtual environment. This test assesses sense of direction. The interaction is much like a first-person-shooter video game. Participants sit at a desktop computer and use the keyboard and mouse to navigate through virtual hallways. First, participants navigate a square-shaped hallway in order to learn the interface. Then they move on to a winding hallway, where there are 4 objects along the path. Participants traverse the hallway twice. On the first traversal, the objects are pointed out to the participant. On the second traversal, at each object the participant is asked to estimate the distance and direction to two other objects. For direction, they point using a dial marked with 360 degree tick marks. There are 8 distance and 8 direction estimates made in all. For distance scoring, distance estimates are correlated to actual distances, and the correlation coefficient is used as the score. For directional scoring, the mean absolute difference (in degrees) between the estimated directions and actual directions is computed. At the end of the test, participants are asked to sketch a map of the environment to scale. These sketch maps are graded on a point scale, where zero is a perfect score. One point is added to the score for each object that is misplaced or left out. Additionally, one point is added to the score for each section of the path that is (a) a wrong turn, (b) an additional hallway section that does not belong, or (c) a hallway segment left out that does belong [21].
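The two numeric scores just described reduce to a few lines of code. The sketch below is our own illustration (with made-up estimates, not study data): it computes the distance correlation and the mean absolute angular error, wrapping direction errors into the 0 to 180 degree range so that, for example, 350 versus 10 degrees counts as a 20 degree error.

```python
import numpy as np

def distance_score(estimated: np.ndarray, actual: np.ndarray) -> float:
    """Correlation between estimated and actual distances (higher is better)."""
    return float(np.corrcoef(estimated, actual)[0, 1])

def direction_score(estimated_deg: np.ndarray, actual_deg: np.ndarray) -> float:
    """Mean absolute angular error in degrees (lower is better)."""
    diff = np.abs(estimated_deg - actual_deg) % 360.0
    diff = np.minimum(diff, 360.0 - diff)   # wrap errors past 180 degrees
    return float(diff.mean())

# Hypothetical 8 estimates each, matching the test described above.
est_dist = np.array([3.0, 5.5, 9.0, 4.0, 7.5, 2.0, 6.0, 8.0])
act_dist = np.array([3.2, 5.0, 8.5, 4.5, 7.0, 2.5, 6.5, 8.8])
est_dir = np.array([10.0, 95.0, 180.0, 350.0, 45.0, 270.0, 135.0, 315.0])
act_dir = np.array([15.0, 90.0, 175.0, 5.0, 50.0, 265.0, 140.0, 310.0])
print(distance_score(est_dist, act_dist))   # close to 1.0
print(direction_score(est_dir, act_dir))    # mean error of a few degrees
```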


2.5 Scaffolded Learning

Mixed simulators can be considered technology-mediated tools for education. This section describes the method from the educational psychology literature, scaffolding, that influenced the design of the mixed simulator. First, scaffolding is defined. Then relevant types of scaffolds are described.

2.5.1 Definition

Scaffolding is guided instruction that fades over time as the learner gains competence [51]. The term scaffolding is an analogy to building construction: the building is the learner's understanding of a concept, and the scaffolding is the support to bring the learner to competency. Just as in construction, the amount of scaffolding is reduced as the learner comes closer to understanding the concept. This scaffolding approach has been shown to increase learners' independent learning ability. As the scaffolding is decreased, learners naturally take more responsibility for understanding the concepts and rely less on the scaffolding.

2.5.2 Technology-Mediated Scaffolding

Traditionally, scaffolding refers to teacher interaction with students, but more recently the concept has been applied to technology-mediated scaffolding [26, 51] (e.g., MR-based scaffolding). Technology-mediated scaffolding involves designing interfaces and systems that can fade as the user gains more understanding about how to interact with the system and about the concepts being taught. For example, in [26] a group of high school students was given scientific modeling software (e.g., software that aided them in creating visual models of weather patterns and fluid dynamics). When the students started using the software, there were scaffolds in the form of pop-ups that would offer to help the user understand how to use the software effectively. Over time, and based on user input, these interface hints became less frequent. This scaffolding was shown to be very effective and increased students' independent exploration of more advanced features in the software.


2.5.3 Scaffolding with Abstract and Concrete Representations

In the learning process [34, 35], it can be beneficial to scaffold learning [46, 69] with both abstract and concrete representations of a concept. Concrete representations (i.e., the anesthesia machine) and abstract representations (i.e., the VAM) offer the student different types of knowledge and provide different kinds of scaffolding. Psychology research has shown that scaffolding with multiple representations that fade from abstract to concrete is beneficial in learning difficult concepts [27, 28, 34, 35, 66]. Note that we are considering real devices, such as an anesthesia machine, to be concrete representations.

Concrete representations offer concrete experience: "tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality" [35]. For example, a real anesthesia machine, a concrete representation, is effective for teaching procedural concepts and psychomotor skills, such as how to physically interact with a specific anesthesia machine. It also provides tactile feedback, such as the feel of the fluted knob for setting oxygen flow. Concrete representations preserve natural spatial and metric skills such as orientation, relative position, shape, and size. Physical simulators and virtual reality simulators that attempt to achieve photorealism are also considered to be concrete representations.

Abstract representations offer abstract conceptualization: "thinking about, analyzing, or systematically planning, rather than using sensation as a guide" [35]. For example, the VAM, an abstract representation, teaches students about intangible concepts such as invisible gas flow, which can be applied to many anesthesia machine models.


Currently, students train with both the VAM and the real anesthesia machine representations to gain a broader understanding of anesthesia machines.

Figure 2-1. The virtuality continuum. [Reprinted with permission from Milgram, P. 2009. P. Milgram and F. Kishino, "A Taxonomy of Mixed Reality Visual Displays," IEICE Transactions on Information and Systems, vol. 77, no. 12, pp. 1321-1329, 1994.]


CHAPTER 3
DESIGN AND IMPLEMENTATION OF A MIXED SIMULATOR

A simulation modeler must consider how a model (e.g., a dynamic model) is related to the corresponding physical phenomenon. Understanding this relationship is integral to the simulation model creation process. For example, to create a simulation based on a functional block model of a real machine, the modeler must know which machine parts each functional block represents; that is, the modeler must understand the mapping from the real phenomenon to each functional block. The modeler performs a mental geometric transformation between the abstract components of the model and the concrete components of the real phenomenon. The ability to effectively perform this transformation is likely dependent on spatial ability (e.g., the ability to mentally rotate objects), which is highly variable in the general population. Modelers or learners with low spatial cognition may have difficulty mentally mapping an abstract model to its corresponding physical phenomenon.

The focus of this chapter is on mixed reality (MR)-based approaches for visualizing the mapping between an abstract simulation model and the corresponding physical phenomenon. Understanding and creating these mappings represents a challenging task, since for diagram-based dynamic models, complex physical and spatial relationships are often simplified or abstracted away. Through this abstraction, the mapping from the model to the corresponding physical phenomenon often becomes more ambiguous for the user.

For example, consider a web-enabled, diagram-based, dynamic, transparent reality [36] model of an anesthesia machine (Figure 3-1), called the Virtual Anesthesia Machine (VAM), that is implemented in Director (Adobe) and used via standard internet browsers. Transparent reality, as used in the VAM, provides anesthesia machine users an interactive and dynamically accurate visualization of internal structure and processes for appreciating how a generic, bellows ventilator anesthesia machine operates.


To facilitate understanding of internal structure and processes through visualization, (a) the pneumatic layout is streamlined and its superficial details are removed or abstracted, (b) pneumatic tubing is rendered transparent, (c) naturally invisible gases like oxygen and nitrous oxide are made visible through color-coded icons representing gas molecules (color coding according to 6 user-selectable, widely adopted medical gas color code conventions), and (d) the variable flow rate and composition of gas at a given location are denoted by the speed of movement and the relative proportion of gas molecule icons of a given color, respectively. Compared to a photorealistic simulation that uses a simulation engine identical to the VAM's, the VAM has been shown to enhance understanding of anesthesia machine function [14, 15].

Students are expected to learn anesthesia machine concepts with the VAM and apply those concepts when using the real machine. To apply the concepts from the VAM when using a real machine, students may need to identify the mapping between the components of the VAM (the dynamic model) and the components of a real anesthesia machine. For example, as shown in Figure 3-2, the green knob of A (the gas flowmeters) controls the amount of oxygen flowing through the system, while the blue knob controls the amount of nitrous oxide (N2O), an anesthetic gas. These gases flow from the gas flowmeters into B, the vaporizer. The arrows show how the real components are mapped to the VAM.

Note how the spatial relationship between the flowmeters (A) and the vaporizer (B) is laid out differently in the VAM than in the real machine. That is, the flowmeters have been spatially reversed in the VAM. In the VAM, the N2O flowmeter is on the right and the O2 flowmeter is on the left. Conversely, in the real anesthesia machine, the N2O flowmeter is on the left and the O2 flowmeter is on the right. In all anesthesia machines the functional relationship between the flowmeters is always the same, but the physical meters and knobs may be laid out differently in different machines due to engineering and ergonomic constraints.


The purpose of the spatial reversal in the VAM is to make the gas flow dynamics easier to visualize and understand. Because the VAM simplifies these spatial relationships, understanding the functional relationships of the components is also easier (i.e., understanding that mixed O2 and N2O gases flow from the gas flowmeters to the vaporizer). However, this simplification can create difficulties for students when spatially mapping the VAM model to the anesthesia machine. For example, educators have noted that some students training with the VAM may memorize that turning the left knob increases the O2. Then, when these students interact with the real machine, they will accidentally increase the N2O instead. This action could lead to negative training transfer and could be potentially fatal to a patient.

Although understanding the mapping between the VAM and the anesthesia machine is critical to the anesthesia training process, mentally identifying the mapping is not always obvious. This mapping problem may be connected to spatial ability, which can be highly variable over a large user base. This research proposes that a mixed reality simulation can offer a visualization of the mapping, helping the user visualize the relationships between the diagram-based dynamic model (e.g., the VAM) and the corresponding real phenomenon (e.g., the real anesthesia machine).

This chapter describes a method of integrating a diagram-based dynamic model, the physical phenomenon being simulated, and the visualizations of the mapping between the two into the same context. To demonstrate this integration, this chapter presents the Augmented Anesthesia Machine (AAM), a mixed reality-based system that combines the VAM model (i.e., an abstract simulation) with a real anesthesia machine (i.e., a physical simulation) (Figure 3-3). Note that we consider the real machine to be a physical simulation: although the machine is a working anesthesia machine with real gas flows, which some readers may not consider a simulation, the corresponding training scenario is a simulation. That is, in training there is no real human life at stake.


To integrate abstract and physical simulation, the AAM first spatially reorganizes the VAM components to align with the real machine. Then, it superimposes the spatially reorganized components into the user's view of the real machine (Figure 3-1). Finally, the AAM synchronizes the abstract simulation with the real machine, allowing the user to interact with the diagram-based dynamic model (the VAM model) through interacting with the real machine controls, such as the flowmeter knobs. By combining the interaction and visualization of the VAM and the real machine, the AAM helps students to visualize the mapping between the VAM model and the real machine.

This chapter is organized as follows. In section 3.1, we describe the cognitive challenges that students encounter when transferring knowledge learned with an abstract model of an anesthesia machine to the real-world physical machine. In section 3.2, we address these spatial challenges with a mixed reality-based approach; and in section 3.3, we describe the implementation in detail. Note that parts of this work were published in IEEE Virtual Reality 2008 [58] and the journal Computers and Graphics in 2009 [59].

3.1 The Virtual Anesthesia Machine (VAM) and the Anesthesia Machine

The purpose of the research described in these sections is to combine real phenomena (e.g., a physical simulation) with a corresponding abstract, transparent reality model, with the ultimate goal of improving understanding of complex concepts. As an example implementation of this method, we explore an application in anesthesia education. In this implementation, students interact with a real anesthesia machine while visualizing the model in context with the real machine's components. Before detailing the methods and implementation of mixed simulators, this section describes how students interact with the real machine and the model (the VAM) in the current training process.


The following example shows how students interact with one anesthesia machine component, the gas flowmeters, and describes how students are expected to mentally map the VAM gas flowmeters to the real gas flowmeters.

3.1.1 The Gas Flowmeters in the Real Anesthesia Machine

A real anesthesia machine anesthetizes patients by administering anesthetic gases into the patient's lungs. The anesthesiologist monitors and adjusts the flow of these gases to make sure that the patient stays safe and under anesthesia. The anesthesiologist does this by manually adjusting the gas flow knobs and monitoring the gas flowmeters, as shown in Figure 3-4. The two knobs in the magnified picture control the flow of gases in the anesthesia machine, and the bobbins (floats) in the flowmeters above them move along a graduated scale to display the current flow rate. If a user turns the color-coded knobs, the gas flow changes and the bobbins move to indicate the new flow rate.

3.1.2 The Gas Flowmeters in the VAM

The VAM models these gas flow control knobs and bobbins with 2D icons (Figure 3-5) that resemble the gas flow knobs and bobbins on the real machine. As with the real machine, the user can adjust the gas flow in the VAM by turning the knob icons in the appropriate direction (i.e., clockwise to decrease and counterclockwise to increase). Since the VAM is a 2D online simulation, the user clicks and drags with the mouse in order to adjust the knob icons. When the user turns a knob, the rate of gas flow changes in the visualization; animated, color-coded gas particles (e.g., blue particles = N2O; green particles = O2) change their speed of movement accordingly to represent the magnitude of the flow rate. These gas particles and the connections between the various machine components are invisible in the real machine. As a transparent reality simulation, the VAM models the invisible gas flow, the hidden internal connections, the interaction, and the appearance of the real gas flowmeters.
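The knob-to-particle mapping just described can be summarized in a few lines. The sketch below is our illustration, not VAM source code (the VAM is implemented in Director); the constants relating knob rotation to flow rate and flow rate to icon speed are made up for the example.

```python
# Illustrative constants (hypothetical): how much one degree of knob
# rotation changes the flow, and how fast icons move per unit of flow.
LPM_PER_DEGREE = 0.01      # liters per minute of flow per degree turned
PIXELS_PER_LPM = 12.0      # icon speed (pixels/s) per liter per minute

def update_flow(flow_lpm: float, knob_delta_deg: float) -> float:
    """Counterclockwise rotation (positive delta) increases flow;
    clockwise (negative delta) decreases it, clamped at zero."""
    return max(0.0, flow_lpm + knob_delta_deg * LPM_PER_DEGREE)

def particle_speed(flow_lpm: float) -> float:
    """Gas molecule icons move proportionally faster at higher flow."""
    return flow_lpm * PIXELS_PER_LPM

o2_flow = 1.0                            # liters per minute
o2_flow = update_flow(o2_flow, +90.0)    # quarter turn counterclockwise
print(o2_flow, particle_speed(o2_flow))  # 1.9 L/min, 22.8 pixels/s
```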


Within this modeling, there is a mapping between the real machine's gas flowmeters and the VAM's. Students are expected to mentally map the concepts learned with the VAM (i.e., visible gas flow) to their interactions with the real machine. Anesthesia educators have noted that because the VAM and the real machine are complex and spatially organized differently, a subset of students (e.g., perhaps those with low spatial ability) may have difficulty mentally mapping the VAM to the real machine. This mapping difficulty may inhibit their understanding of how the real machine works internally. In order to resolve this issue, this research proposes to combine the visualization of the VAM with the interaction of the real machine. Methods to perform this combination are presented in the following section.

3.2 Mixed Simulator Design Methodology

A mixed simulator blends abstract simulation with a corresponding physical simulation, such as combining the VAM with a real anesthesia machine. To create this combination, we use MR to contextualize the VAM with the real machine. Contextualization involves two criteria: (1) registration: spatially superimpose parts of the abstract model over the corresponding parts of the physical simulator (or vice versa); and (2) synchronization: temporally synchronize the abstract simulation with the physical simulation. This section describes two such methods through the example of mapping the VAM simulation to the anesthesia machine; a sketch of the corresponding update loop follows below. The purpose of these two specific methods is to help students orient themselves to the real machine after learning with the VAM. These methods have also been extended with additional visualizations, described in sections 3.2.1.3 and 3.2.3. Students may start with the VAM and proceed through one or both of the following contextualization methods before learning with the anesthesia machine. Through interaction with the AAM, students may better understand the mapping from the VAM to the anesthesia machine and enhance their overall knowledge of anesthesia machines (see the user study in Chapter 4).
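In software terms, the two criteria map onto two steps of a per-frame update loop. The skeleton below is our own structural illustration, not the AAM code; the tracker, sensor, model, and renderer objects are hypothetical stand-ins that a real system would have to provide.

```python
class MixedSimulator:
    """Per-frame loop pairing the two contextualization criteria."""

    def __init__(self, tracker, machine_sensors, vam_model, renderer):
        self.tracker = tracker            # e.g., fiducial-based lens tracker
        self.sensors = machine_sensors    # e.g., encoders on the flow knobs
        self.model = vam_model            # the abstract (VAM) simulation
        self.renderer = renderer

    def update(self, dt: float) -> None:
        # Synchronization: drive the abstract model from the real controls,
        # so the simulated gas flow tracks what the machine is actually doing.
        for name, value in self.sensors.read_controls().items():
            self.model.set_control(name, value)
        self.model.step(dt)

        # Registration: render each VAM component at the pose of its real
        # counterpart, as seen through the tracked magic lens.
        lens_pose = self.tracker.lens_pose()
        for component in self.model.components():
            self.renderer.draw_overlay(component, lens_pose)
```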


3.2.1 Contextualization Method 1: Real Machine-Context (AAM-MR)

One way to visualize the mapping between a diagram-based dynamic model and a real phenomenon is to spatially reorganize the model layout and superimpose the model's components over the corresponding components of the real phenomenon itself or a physical simulation. Using this method, the components of the VAM (e.g., the gas flowmeters icon, vaporizer icon, and ventilator icon) are spatially reorganized and superimposed onto the context of the real machine (Figure 3-6). Each model component is repositioned in 3D to align with the corresponding real component. Through this alignment, the user is able to visualize the mapping between the VAM and the real machine.

For example, consider contextualizing the VAM's gas flowmeters with the real anesthesia machine's gas flowmeters (Figure 3-6). This requires us to overlay computer graphics (the VAM gas flowmeters) on the user's view of the real world. In effect, the user's view of the real gas flowmeters is combined with a synthetic view of the VAM gas flowmeters. This in-context juxtaposition of the VAM gas flowmeters and the real gas flowmeters is designed to help users visualize the mapping between the VAM model and the real machine.

To meet the registration criterion of contextualization, this method cuts out the 2D model components and pastes them over the corresponding parts of the real machine. Once this process is completed, the VAM components can be visualized superimposed over the real machine, as seen in Figure 3-6. This overlay helps users to visualize the mapping between the real machine and the simulation model. Note that with both contextualization methods presented here, the underlying functional relationships of the simulation model stay the same. For example, in this method, although the reorganized VAM components no longer maintain the original model's spatial relationships, they do maintain the same functional relationships. In the AAM-MR, the gas particle visualization still flows between the same components, but the flow visualization takes a 3D path through the real machine.


3.2.1.1 Visualization with the magic lens

To visualize the superimposed gas flowmeters, users look through a tracked, 6DOF magic lens (Figure 3-8). The lens allows users to move freely around the machine and view the simulation from a first-person perspective, thereby augmenting their visual perception of the real machine with the overlaid VAM model graphics. The relationship between the user's head and the lens is analogous to the OpenGL camera metaphor: the camera is positioned at the user's eye, and the projection plane is the lens; the lens renders the VAM simulation directly over the machine from the perspective of the user. Through the lens, users can view a first-person perspective of the VAM model in context with a photorealistic 3D model of the real machine. The 3D machine model appears on the lens in the same position and orientation as the real machine, as if the lens were a transparent window (or a magnifying glass) that the user was looking through.
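This eye-plus-lens-plane camera corresponds to an off-axis perspective projection. The sketch below follows the standard generalized projection formulation (e.g., as popularized by Kooima); it is our illustration rather than the AAM renderer, and the lens corner and eye positions are hypothetical.

```python
import numpy as np

def lens_projection(pa, pb, pc, pe, near, far):
    """Off-axis projection for a tracked lens (screen) viewed from eye pe.

    pa, pb, pc: lens corners (lower-left, lower-right, upper-left) in
    world coordinates; pe: eye position. Returns a 4x4 matrix mapping
    world points onto the lens plane as seen from the eye.
    """
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # lens right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # lens up axis
    vn = np.cross(vr, vu)
    vn /= np.linalg.norm(vn)                   # lens normal, toward the eye
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -va @ vn                               # eye-to-lens-plane distance
    l, r = (vr @ va) * near / d, (vr @ vb) * near / d
    b, t = (vu @ va) * near / d, (vu @ vc) * near / d
    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])              # glFrustum-style matrix
    M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn])  # rotate into lens frame
    T = np.eye(4); T[:3, 3] = -pe                        # translate eye to origin
    return P @ M @ T

# Hypothetical 20 cm x 15 cm lens held 50 cm in front of the user's eye.
proj = lens_projection(np.array([-0.1, -0.075, -0.5]),
                       np.array([ 0.1, -0.075, -0.5]),
                       np.array([-0.1,  0.075, -0.5]),
                       np.array([0.0, 0.0, 0.0]), 0.1, 10.0)
```

Recomputing this matrix each frame from the tracked head and lens poses is what makes the lens behave like a window rather than a fixed camera.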


3.2.1.2 Interaction

In this Real Machine-Context, users interact with the simulation through their interactions with the real machine; i.e., the anesthesia machine acts as a tangible user interface. To facilitate this interaction style, the interface and the simulation must be synchronized. For example, the gas flowmeters model (specifically, the graphical representation of the gas particles' flow rate and the flowmeter bobbin icon position) must be synchronized with the real machine. That is, changes in the rate of the simulated gas flow must correspond with changes in the physical gas flow in the real anesthesia machine. In the AAM, if a user turns the N2O knob on the real machine to increase the real N2O flow rate (Figure 3-9), the simulated N2O flow rate will increase as well. The user can then visualize the rate change on the magic lens interactively, as the blue particles (icons representing the N2O gas molecules) visually increase in speed until the user stops turning the knob. Thus, the real machine is an interface to control the simulation of the machine. That is, the transparent reality model visualization (e.g., visible gas flow and machine state) is synchronized with the real machine.

With this synchronization, users can observe how their interactions with the real machine affect the model in context with the real machine. The overlaid diagram-based dynamic model enables users to visualize how the real components of the machine are functionally and spatially related, thereby demonstrating how the real machine works internally. This coupling of the overlaid VAM visualization and real machine interaction may help users to more effectively visualize the mappings between the VAM model and the real machine. Users also get to experience the real location, tactile feel, and resistance of the machine controls. For example, the O2 flowmeter knob is fluted while the N2O flowmeter knob is knurled to provide tactile differentiation.

3.2.1.3 HUD visualization

In the case of anesthesia machine training, students may become familiar with the VAM before ever using the real machine. Since these students are already familiar with the 2D VAM, this Real Machine-Context (AAM-MR) method's spatial reorganization of the VAM could be disorienting. To address this disorientation, a heads-up display (HUD) was implemented (Figure 3-10). The HUD shows the familiar VAM icons, which are screen-aligned and displayed along the bottom of the lens screen; each icon has a 3D arrow associated with it that always points at the corresponding component in the anesthesia machine. Thus, if the user needs to find a specific VAM component's new location in the context of the anesthesia machine, the user can follow the arrow above the HUD icon and easily locate the spatially reorganized VAM component. Once the user has located all the reorganized VAM components, the user can optionally press a GUI button to hide the HUD.
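Keeping such a HUD arrow pointed at its machine component reduces to a direction computation in the lens camera's frame. The sketch below is a hypothetical illustration of that step, not the AAM implementation; the view matrix and positions are made-up values.

```python
import numpy as np

def hud_arrow_direction(component_world, icon_camera, world_to_camera):
    """Unit vector, in camera space, from a HUD icon to its 3D component.

    component_world: the machine component's position in world coordinates.
    icon_camera: where the screen-aligned icon sits in camera coordinates.
    world_to_camera: 4x4 view matrix of the tracked lens.
    """
    p = world_to_camera @ np.append(component_world, 1.0)  # to camera space
    v = p[:3] - icon_camera
    return v / np.linalg.norm(v)   # orient the 3D arrow along this vector

# Hypothetical frame: identity view, flowmeters 1 m ahead, icon bottom-center.
view = np.eye(4)
direction = hud_arrow_direction(np.array([0.2, 0.3, -1.0]),
                                np.array([0.0, -0.12, -0.3]),
                                view)
print(direction)   # re-evaluated every frame as the lens moves
```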


3.2.2 Contextualization Method 2: VAM-Context

Another way to visualize the mapping between a real phenomenon and its model is to spatially reorganize the real phenomenon itself so that its components are superimposed into the context of the dynamic model. Using this method, the components of the real machine (e.g., the gas flowmeters, the vaporizer, the ventilation bag, etc.) are reorganized and superimposed into the context of the VAM simulation (Figure 3-11). Each real component is repositioned to align with the corresponding simulated component in the VAM. Through this alignment, the user is able to visualize the mapping between the VAM and the real machine. However, in many cases it is not possible to physically deconstruct a real phenomenon and spatially reorganize its various parts. For example, many components, such as the gas flowmeters, cannot be disconnected or moved within the anesthesia machine. Rather, the lens renders a high resolution, pre-made, 3D scale model of the real machine. This 3D model is readily reconfigurable by performing geometric transformations on its various components. The software can then spatially reorganize the real machine's 3D model to align with the components of the VAM, thereby visualizing the mapping between the two.

3.2.2.1 Visualization

This method takes a 3D anesthesia machine model and reorganizes it on the 2D plane of the VAM. This mode is different from the contextualization described in the gas flowmeters contextualization example, in which the user looked through the magic lens like a transparent window. In VAM-Context mode, the tablet PC lens is just a hand-held screen that displays a 2D simulation from a stationary viewpoint, rather than acting as a see-through window. Essentially, this mode enables the user to control the 2D VAM visualization through interaction with the real anesthesia machine (Figure 3-12).


3.2.2.2 Interaction

The VAM-Context interaction style stays the same as in the Real Machine-Context (AAM-MR): users can interact with the real machine as an interface to the simulation model. To interact with a specific simulation component, users must first identify the superimposed real machine component on the lens, and then interact with the real component on the real machine. This maintains the second criterion of contextualization, synchronizing the simulation with the real phenomenon, and allows users to see how their real machine interactions map to the context of the VAM model.

3.2.3 Transformation Between VAM-Context and Real Machine-Context

Choosing the appropriate contextualization method for a given application is not trivial. In many cases, users might prefer to interactively switch between the two methods. If users have the ability to switch between methods, it is beneficial to display a visual transformation between the contextualizations. To create a smooth transition between VAM-Context and Real Machine-Context (AAM-MR), a geometric transformation can be implemented. The 3D models (the machine and the 3D VAM icons) animate smoothly between the differing spatial organizations of each contextualization method. This transformation morphs from one contextualization method to the other with an animation of a simple geometric transformation (Figure 3-13).

Consider converting from Real Machine-Context (AAM-MR) to VAM-Context, shown in Figure 3-13. Initially, in Real Machine-Context, the 3D gas flowmeters model is integrated with the 3D model of the real machine. Then the user presses a GUI button on the lens to start the transformation, and the 3D model of the gas flowmeters translates in an animation: the 3D gas flowmeters geometric model moves (Figure 3-13, from A to B to C to D) to its corresponding position behind the gas flowmeters icon in the VAM (Figure 3-13 D).


Once the transformation into VAM-Context is complete, the simulation visualization becomes screen-aligned (i.e., the lens is no longer tracked and displays the simulation in 2D). Similarly, to transform the gas flowmeters from VAM-Context to Real Machine-Context, the previous transformations are inverted. These transformation animations help to demonstrate the mappings between the real machine and the VAM model, thereby offering students a better understanding of the linkage between the VAM model and the AAM. This could help them better apply their VAM knowledge in the context of the real anesthesia machine.

Transformation implementation: To facilitate this transformation between the two methods, an explicit mapping between the component positions in each method must be implemented. One way to implement such a mapping is with a semantic network. The semantic network is a graph in which there exists a series of links, or edges, between the components in each method. The structure of the semantic network is simple, although there are many components that must be linked. Each 3D model of a real machine component (i.e., the gas flowmeters) is linked to a corresponding VAM icon. This icon is linked to a position in the VAM and a position in the real machine. Likewise, the gas particle visualizations also have links to positions in both the real machine and the VAM. When the user changes the visualization method, the components and the particles all translate in an animation to the positions contained in their semantic links. These links represent the mappings between the real machine and the VAM; they also represent the mappings that exist between the two visualization methods. The animation of the transformation visualizes the mappings between the components in each method. For more examples of using semantic networks in mixed simulators, see Chapter 6.
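The description above translates directly into a small data structure: each link stores a component's two anchor positions, and the transformation animation interpolates between them. The sketch below is our minimal illustration with hypothetical positions, not the framework from Chapter 6.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SemanticLink:
    """One edge of the semantic network: a component's two anchors."""
    name: str
    machine_pos: np.ndarray   # 3D position on the real machine model
    vam_pos: np.ndarray       # position behind its icon in the 2D VAM

    def animate(self, t: float, to_vam: bool) -> np.ndarray:
        """Linearly interpolate the component's position at time t in [0, 1];
        swapping the direction plays the inverse transformation."""
        a, b = (self.machine_pos, self.vam_pos) if to_vam else \
               (self.vam_pos, self.machine_pos)
        return (1.0 - t) * a + t * b

# Hypothetical network: two linked components of the AAM.
network = [
    SemanticLink("flowmeters", np.array([0.35, 1.20, 0.10]),
                               np.array([0.10, 0.60, 0.00])),
    SemanticLink("vaporizer",  np.array([0.60, 1.10, 0.10]),
                               np.array([0.45, 0.60, 0.00])),
]
for t in (0.0, 0.5, 1.0):   # three frames of the morph toward VAM-Context
    print({link.name: link.animate(t, to_vam=True) for link in network})
```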


3.3 Implementing a Mixed Simulator

This section describes the engineering challenges encountered when implementing a mixed simulator such as the AAM: (1) visual contextualization (i.e., displaying the model component in context with the real component), (2) interaction contextualization (e.g., interaction with the real phenomenon affects the state of the model), and (3) integrating the tracking and display technologies that enable contextualization (Figure 3-14). This section outlines our approach to addressing these challenges in the AAM implementation. Since the AAM is an educational tool, our approach focuses on maximizing the educational benefits. The approach presented in this section is conceptually built around the educational goal of scaffolding students' learning, enabling them to more effectively transfer and apply their VAM knowledge to the real anesthesia machine.

3.3.1 Visual Contextualization

Our approach to visual contextualization (i.e., visualizing the model in the context of the corresponding physical phenomenon) is to visually collocate each diagrammatic component with each anesthesia machine component. The educational purpose of this visual collocation is to help students apply their VAM knowledge in a real machine context (and vice versa). The main engineering challenge here is how to display two different representations of the same object (e.g., the 3D anesthesia machine and the 2D VAM) in the same space. Our approach to visual contextualization addresses this challenge.

Without the AAM, students must mentally transfer the VAM functionality to real anesthesia machine components. This may be a difficult transformation for some students (e.g., students with low spatial ability) because the VAM is in 2D while the anesthesia machine is in 3D with different spatial relationships.


Contextualization aims to aid students in addressing this challenge. To meet this challenge, our approach involves: (1) transforming the 2D VAM diagrams into 3D objects (e.g., a textured mesh, a textured quad, or a retexturing of the physical phenomenon's 3D geometric model) and (2) positioning and orienting the transformed diagram objects in the space of the corresponding anesthesia machine component (i.e., the diagram objects must be visible and should not be located inside their corresponding real component's 3D mesh). In our approach (Figure 3-15), each VAM component is manually texture-mapped to a quad, and then the quad is scaled to the same scale as the corresponding 3D mesh of the physical component. Next, each VAM component quad is manually oriented and positioned in front of the corresponding real component's 3D mesh, specifically on the side of the component that the user looks at the most. For example, the flowmeters VAM icon is laid over the real flowmeter tubes. The icon is placed where users read the gas levels on the front of the machine, rather than on the back of the machine where users rarely look. Note that this method has been shown to be an effective contextualization method, but there are many other approaches to this challenge (e.g., texturing the machine model itself or using more complex 3D models of the diagram rather than texture-mapped 3D quads) that we may investigate in the future. A minimal sketch of this quad placement follows.
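The sketch below is an illustration of step (2) only; the structure names and offset value are hypothetical, and in the AAM the placement was done manually rather than computed from a bounding box.

    # Sketch of positioning a texture-mapped VAM icon quad in front of a
    # machine component's mesh (hypothetical structures, not the AAM code).
    from dataclasses import dataclass

    @dataclass
    class AABB:
        """Axis-aligned bounding box of a component's 3D mesh."""
        min_x: float
        min_y: float
        min_z: float
        max_x: float
        max_y: float
        max_z: float

    def icon_quad(box: AABB, offset: float = 0.02):
        """Return four vertices of a quad scaled to the component's extents
        and pushed slightly toward the front (+z) face, so the icon stays
        visible and is never buried inside the component's mesh."""
        z = box.max_z + offset  # in front of the side the user views most
        return [(box.min_x, box.min_y, z),
                (box.max_x, box.min_y, z),
                (box.max_x, box.max_y, z),
                (box.min_x, box.max_y, z)]

    flowmeter_box = AABB(0.40, 1.10, 0.05, 0.50, 1.50, 0.12)
    print(icon_quad(flowmeter_box))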


3.3.1.2 Visual overlay

Once the problem of transforming a 2D diagram to a 3D object is addressed, another challenge is how to display the transformed diagram in the same context as the 3D mesh of the physical component so that the student can perceive it and learn from it, regardless of spatial ability. For example, the diagram and the physical component's mesh could be alpha blended together; the user would then be able to visualize both the geometric model and the diagrammatic model at all times. However, in the case of the AAM, alpha blending would create additional visual complexity that could confuse the user and hinder the learning experience. For this reason, the VAM icon quads are opaque. They occlude the underlying physical component geometry. However, since users interact in the space of the real machine, they can look behind the tablet PC to observe machine operations or details that may be occluded by VAM icons.

3.3.1.3 Simulation states and data flow

There are many internal states of an anesthesia machine that are not visible in the real machine. Understanding these states is vital to understanding how the machine works. The VAM shows these internal state changes as animations so that the user can visualize them. For example, the VAM ventilator model has three discrete states (Figure 3-16): (1) off, (2) on and exhaling, and (3) on and inhaling. A change in the ventilator state changes the visible flow of data (e.g., the flow of gases). Similarly, the AAM uses animated icons (e.g., changes in the textures on the VAM icon quads) to denote simulation state changes. To minimize spatial complexity, only one state per icon is shown at a time. The current state of an icon corresponds to the flow of the animated 3D gas particles and helps students better understand the internal processes of the machine. A sketch of such state logic appears below.
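This minimal sketch encodes the three ventilator states described above; the texture file names and the signed flow direction are hypothetical, and the actual VAM/AAM state handling may differ.

    # Sketch of the three-state ventilator logic (hypothetical names).
    from enum import Enum, auto

    class VentState(Enum):
        OFF = auto()
        ON_EXHALING = auto()
        ON_INHALING = auto()

    def icon_texture(state: VentState) -> str:
        """Only one state texture is shown per icon at a time."""
        return {VentState.OFF: "vent_off.png",
                VentState.ON_EXHALING: "vent_exhale.png",
                VentState.ON_INHALING: "vent_inhale.png"}[state]

    def particle_direction(state: VentState) -> int:
        """Sign of gas-particle flow along the ventilator arc: +1 toward
        the patient, -1 away, 0 when the ventilator is off."""
        return {VentState.OFF: 0,
                VentState.ON_EXHALING: -1,
                VentState.ON_INHALING: +1}[state]

    state = VentState.ON_INHALING
    print(icon_texture(state), particle_direction(state))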


3.3.1.4 Diagrammatic graph arcs between components

Anesthesia educators note that students may also have problems understanding the functional relationships between the real machine components. In the VAM, these relationships are visualized with 2D pipes. The pipes are the arcs through which particles flow in the VAM gas flow model, and the direction of the particle flow denotes the direction that the data flows through the model. These arcs represent the complex pneumatic connections found inside the anesthesia machine. However, in the VAM these arcs are simplified for ease of visualization and spatial perception. For example, the VAM pipes are laid out so that they do not cross each other, to ease the data flow visualization. The challenge is to transform these arcs from the 2D model to 3D objects (Figure 3-17) while making the visualization (which is inherently more complex in 3D than in 2D) as easy as possible for the user.

Our approach also takes steps to spatially simplify the connections. To aid the user in visualizing the connections, the AAM's pipes are visualized as 3D cylinders, but they are not collocated with the real pneumatic connections inside the physical machine. Instead, they are simplified to make the particle flow easier to visualize and perceive spatially. This simplification emphasizes the functional relationships between the components rather than focusing on the spatial complexities of the pneumatic pipe geometry. The pipes in the AAM intersect neither the machine geometry nor other pipes. However, in transforming these arcs from 2D to 3D, some of the arcs appear to visually cross each other from certain perspectives because of the complex way the machine components are laid out. In the cases that are unavoidable due to the machine layout, the overlapping sections of the pipes are assigned different colors to facilitate the 3D data flow visualization. These design choices are meant to enable students to visually trace the 3D flow of gases in the AAM. A sketch of such a color assignment appears below.
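The following sketch shows one way the color assignment could be automated with greedy graph coloring; in the AAM the crossings and colors were chosen by hand, so this is an illustration under that assumption, not the implementation.

    # Sketch: pipes that can appear to cross never share a color
    # (hypothetical pipe names and palette).
    PALETTE = ["white", "yellow", "cyan", "magenta"]

    def assign_pipe_colors(pipes, crossings):
        """pipes: iterable of pipe ids. crossings: set of (a, b) pairs of
        pipes that may visually cross from some viewpoint."""
        colors = {}
        for p in pipes:
            neighbors = {b if a == p else a for (a, b) in crossings
                         if p in (a, b)}
            used = {colors[n] for n in neighbors if n in colors}
            colors[p] = next(c for c in PALETTE if c not in used)
        return colors

    pipes = ["o2_supply", "n2o_supply", "common_gas", "scavenging"]
    crossings = {("o2_supply", "common_gas"), ("n2o_supply", "common_gas")}
    print(assign_pipe_colors(pipes, crossings))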


3.3.1.5 The magic lens display see-through effect

For enhanced learning, our approach aims to put the diagram-based, dynamic, transparent reality model in the context of the real machine using a see-through magic lens. For the see-through effect, the lens displays a scaled, high-resolution 3D model of the machine that is registered to the real machine. There are several reasons why the see-through functionality was implemented with a registered 3D model rather than a video see-through technique (prevalent in many mixed reality applications), in which the VAM components would be superimposed over a live video stream. The two main reasons for a 3D model see-through implementation are:

(1) To facilitate video see-through, a video camera would have to be mounted to the magic lens. Limitations of video camera field of view and positioning make it difficult to maintain the magic lens window metaphor.

(2) Using a 3D model of the machine increases the visualization possibilities. For example, the parts of the real machine cannot be readily physically separated, unlike the parts in a 3D model visualization. This facilitates visualization in the VAM-Context method and the visual transformation between the VAM-Context and Real Machine-Context methods described in the previous section.

There are many other types of displays that could be used to visualize the VAM superimposed over the real machine (such as a see-through head-mounted display (HMD)). The lens was chosen because it facilitates both VAM-Context and Real Machine-Context (AAM-MR) visualizations. More immersive displays (i.e., HMDs) are difficult to adapt to the 2D visualization of the VAM-Context without obstructing the user's view of the real machine. However, as technology advances, we will reconsider alternative display options to the tablet PC.

3.3.1.6 Tracking the magic lens display

The next challenge is to display the contextualized model to the user from a first-person perspective and in a consistent space. As stated, our approach utilizes a magic lens that can be thought of as a window into the virtual world of the contextualized diagrammatic model. In order to implement this window metaphor, the user's augmented view had to be consistent with their first-person real-world perspective, as if they were observing the real machine through an actual window (rather than an opaque tablet PC that simulates a window). The 3D graphics displayed on the lens had to be rendered consistently with the user's first-person perspective of the real world. In order to display this perspective on the lens, the tracking system tracked the 3D position and orientation of the magic lens display and approximated the user's head position.


To track the position and orientation of the magic lens, the AAM tracking system uses a computer vision technique called outside-looking-in tracking (Figure 3-18). This tracking method is widely used by the MR community and is described in more detail in [73]. The technique consists of multiple stationary cameras that observe special markers attached to the objects being tracked (in this case, the tablet PC that instantiates the magic lens). The images captured by the cameras are used to calculate the positions and orientations of the tracked objects. The cameras are first calibrated by having them all view an object of predefined dimensions; the relative position and orientation of each camera can then be calculated. After calibration, each camera must search each frame's images for the markers attached to the lens; the marker position information from multiple cameras is then combined to create a 3D position. To reduce this search, the AAM tracking system uses cameras with infrared lenses and retro-reflective markers that reflect infrared light. Thus, the cameras see only the markers (the reflective balls in Figure 3-18) in the image plane.

The magic lens has three retro-reflective balls attached to it, and each ball has a predefined position relative to the other two. Triangulating and matching the balls from at least two camera views allows calculation of the balls' 3D position and orientation, which is then used as the position and orientation of the magic lens. The tracking system sends the position and orientation over a wireless network connection to the magic lens, which then renders the 3D machine from the user's current perspective. Although tracking the lens alone does not yield the exact perspective of the user, it gives an acceptable approximation as long as users know where to hold the lens in relation to their head. To view the correct perspective in the AAM system, users must hold the lens approximately 25 cm away from their eyes and orient the lens perpendicular to their eye-gaze direction.
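A minimal sketch of this eye-point approximation follows, assuming the tracker reports the lens center and a unit normal pointing back toward the user; the names are hypothetical, and the production system computes a full view matrix rather than a single look-at target.

    # Sketch of approximating the user's eye point from the lens pose.
    EYE_OFFSET_M = 0.25  # users hold the lens ~25 cm from their eyes

    def approx_eye_position(lens_center, lens_normal):
        """Offset the tracked lens center along its normal toward the user."""
        return tuple(c + EYE_OFFSET_M * n
                     for c, n in zip(lens_center, lens_normal))

    def look_at_target(lens_center):
        """The rendered view looks from the approximated eye point through
        the lens center, consistent with the window metaphor."""
        return lens_center

    lens_center = (0.1, 1.2, 0.8)   # from the outside-looking-in tracker
    lens_normal = (0.0, 0.0, 1.0)   # unit vector toward the user's head
    eye = approx_eye_position(lens_center, lens_normal)
    print("render from", eye, "toward", look_at_target(lens_center))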


To accurately render the 3D machine from the user's perspective independent of where the user holds the lens in relation to the head, both the user's head position and the lens must be tracked. Tracking both the head and the lens will be considered in future work.

3.3.2 Interaction Contextualization

In addition to visually linking the VAM to the real machine components, students must also understand how the VAM components are linked with real machine interaction (e.g., turning knobs). To address this, our approach allows the user to interact with the physical phenomenon, which is used as a real-time interface to the dynamic model. For example, in the AAM, when the user turns the O2 knob on the real machine, the model increases the speed of the O2 particle flow in the VAM data flow model and visualizes this increase on the magic lens. Conceptually, direct interaction with the model should conversely impact the physical phenomenon. This requires external control of the physical phenomenon (e.g., a digital interface controlling an actuator that rotates the O2 knob). In the case of our particular anesthesia machine, external control is not implemented, as it could interfere with patient safety. However, some user control of the unmapped parts of the VAM model is possible (e.g., resetting the particle simulation to a starting state) and is implemented in the AAM. The main challenge here is how to engineer systems for synchronizing the user's physical device interaction with the dynamic model's inputs.

3.3.2.1 Using the physical machine as an interface to the dynamic model

To address the challenge of synchronizing the model with the physical device, the AAM tracking system tracks the input and output (i.e., gas flowmeters, pressure gauges, knobs, buttons) of the real machine and uses them to drive the simulation. For example, when the user turns the O2 knob to increase the O2 flow rate, the tracking system detects this change in knob orientation and sends the resulting O2 level to the dynamic model. The model is then able to update the simulation visualization with an increase in the speed of the green O2 particle icons.
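The mapping from tracked marker state to model input can be sketched as follows; the calibration constants and the dictionary-based model are hypothetical stand-ins for the AAM's actual data flow model.

    # Sketch of the tracker-to-model synchronization (hypothetical values).
    def knob_area_to_flow_lpm(visible_area_px, area_closed_px=120.0,
                              area_open_px=900.0, max_flow_lpm=10.0):
        """Map the tracked marker's visible pixel area to an O2 flow rate.
        The marker protrudes further (larger area) as the knob is opened."""
        t = (visible_area_px - area_closed_px) / (area_open_px - area_closed_px)
        t = min(max(t, 0.0), 1.0)
        return t * max_flow_lpm

    def update_model(model, visible_area_px):
        flow = knob_area_to_flow_lpm(visible_area_px)
        model["o2_flow_lpm"] = flow
        # Particle speed in the VAM data-flow model scales with flow rate.
        model["o2_particle_speed"] = flow / 10.0

    model = {}
    update_model(model, visible_area_px=550.0)
    print(model)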


A 2D optical tracking system with four webcams driven by OpenCV [8] is employed to detect the states of the machine (Table 3-1). State changes of the input devices are easily detectable as changes in 2D position or visible marker area, as long as the cameras are close enough to the tracking targets to detect the change. For example, to track the machine's knobs and other input devices, retro-reflective markers are attached and webcams are used to detect the visible area of the markers (Figure 3-19). When the user turns a knob, the visible area of the tracking marker increases or decreases depending on the direction the knob is turned (e.g., the O2 knob protrudes further from the front panel when the user increases the flow of O2, thereby increasing the visible area of the tracked marker). The machine's pressure gauge needle and bag are more difficult to track since retro-reflective tape cannot be attached to them. Thus, the pressure gauge and bag tracking system uses color-based tracking (e.g., the 2D position of the bright red pressure gauge needle).
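A minimal sketch of this visible-area measurement, written against OpenCV's current Python bindings (assuming OpenCV 4.x); the threshold value and camera index are hypothetical, and the 2009 system predates these bindings, so this is an illustration only.

    # Sketch: threshold the bright retro-reflective marker in an
    # IR-filtered webcam frame and sum its visible contour area.
    import cv2

    cap = cv2.VideoCapture(0)  # IR webcam aimed at the knob marker

    def marker_visible_area(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return sum(cv2.contourArea(c) for c in contours)

    ok, frame = cap.read()
    if ok:
        area = marker_visible_area(frame)
        print("visible marker area (px):", area)  # fed to the dynamic model
    cap.release()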


Many newer anesthesia machines have an RS-232 digital output of their internal states. To demonstrate the generality of our method, we recently developed an AAM for two piston-based (rather than bellows-based, as previously described) anesthesia machines with digital output: a Dräger Apollo and a Dräger Primus. For example, the Apollo AAM (Figure 3-20) has no cameras mounted to it as the Modulus II AAM (pictured in the previous figures) did. This makes the synchronization of machine states more robust. On the Modulus II machine, simply moving a camera could uncalibrate it and invalidate the data readings. In contrast, the Apollo AAM does not suffer from this problem because all of its states are output through the data port and read in through Dräger's DLL-based serial protocol in real time.

3.3.2.2 Pen-based interaction

To more efficiently learn anesthesia concepts, users sometimes require interactions with the dynamic model that do not necessarily map to any interaction with the physical phenomenon. For example, the VAM allows users to reset the model dynamics to a predefined start state. All of the interactive components are then set to predefined start states, and the particle simulation is reset to a start state as well (e.g., this removes all O2 and N2O particles from the pipes, leaving only air particles). This instant reset capability is not possible in the real anesthesia machine due to physical constraints on gas flows. In the AAM, although the user cannot instantly reset the real gas flow, the user does have the ability to instantly reset the gas flow visualization. To do this, the user clicks a 2D button on the tablet screen using a pen interface. Further, the user can use the pen to perform other non-mapped interactions, such as increasing and decreasing the field of view on the tablet by clicking and dragging a 2D slider control. In this way, the pen serves as an interface for changing the simulation visualization in ways that may not have a corresponding interaction with a physical component.

3.3.3 Hardware

This section outlines the hardware used to meet the challenges of visual and interaction contextualization. The system consists of three computers: (1) the magic lens, an HP tc1100 Tablet PC; (2) a Pentium IV computer for tracking the magic lens; and (3) a Pentium IV computer for tracking the machine states. These computers interface with six 30 Hz Unibrain Fire-i webcams. Two webcams are used for tracking the lens; the other four are used for tracking the machine's flowmeters and knobs. The anesthesia machine is an Ohmeda Modulus II. Except for the anesthesia machine, all the hardware components are inexpensive, commercial off-the-shelf equipment.

3.4 Chapter Summary

This chapter presented an approach for contextualizing diagram-based dynamic models with the real phenomena being simulated.


If a user needs to understand the mapping between the dynamic model and the real phenomenon, as in the case of anesthesia education, it could be helpful to incorporate a visualization of this mapping into the simulation. One way of visualizing these mappings is to contextualize the model with the real phenomenon being simulated. Effective contextualization involves two criteria: (1) superimpose the diagrammed parts of the model over the corresponding parts of the real phenomenon (or vice versa) and (2) synchronize the simulation with the real phenomenon. As an example, a diagram-based, dynamic, transparent reality anesthesia machine model, the VAM, was contextualized with the real anesthesia machine that it was simulating. This combination of visualization and interaction allows the user to interact with and visualize the dynamic model (e.g., the VAM) in context with the real world (e.g., the real anesthesia machine).

To facilitate an immersive, interactive visualization of the mapping between the real phenomenon and the simulation, we used MR technology such as a magic lens and tracking devices. The magic lens allowed users to visualize the VAM superimposed into the context of the real machine from a first-person perspective. The lens acted as a window into the world of the overlaid 3D VAM simulation. In addition, MR technology combined the simulation visualization with the interaction of the real machine. This allowed users to interact with the real machine and visualize how this interaction affected the dynamic, transparent reality model of the machine's internal workings.

3.5 Conclusions

Interactive abstract simulation (e.g., the VAM) is a proven way to effectively teach abstract concepts. However, educators have found that many students have difficulty applying these abstract concepts in real-world scenarios. The presented mixed simulator approach addresses this educational challenge. This work shows that the merging of abstract dynamic models and their corresponding physical phenomena can be enabled through mixed reality's merging of real and virtual spaces.


This approach for technology-mediated scaffolding is novel, but it still must be formally evaluated to determine its cognitive impact on the user. The following chapter describes this formal evaluation.

Figure 3-1. The virtual anesthesia machine. A diagram-based, web-enabled, transparent reality, dynamic model of a generic anesthesia machine [Reprinted with permission from Lampotang, S. 2009. University of Florida Department of Anesthesiology, The Virtual Anesthesia Machine. Copyright 2009 University of Florida. Retrieved February 6, 2009 from http://vam.anest.ufl.edu].

Figure 3-2. Mapping between the physical machine and the VAM. Note that A) the flowmeters and B) the vaporizer are spatially reversed in the VAM when compared to their orientation on the physical machine.


Figure 3-3. The augmented anesthesia machine. A) The diagrammatic VAM icons are superimposed over a model of an anesthesia machine. B) A student uses the magic lens to visualize the VAM superimposed over the real machine.

Figure 3-4. A magnified view of the gas flowmeters on the real machine.


Figure 3-5. A magnified view of the gas flow knobs and bobbins in the VAM.

Figure 3-6. The VAM is spatially reorganized to align with the real machine.


Figure 3-7. The user's view of the flowmeters in the Augmented Anesthesia Machine (AAM).

Figure 3-8. The real view and the magic lens view of the machine, shown from the same viewpoint.


Figure 3-9. A user turns the N2O knob on the real machine and visualizes how this interaction affects the overlaid VAM model.

Figure 3-10. The augmented anesthesia machine heads-up display (HUD). The menu at the bottom of the HUD points the user in the direction of each spatially reorganized VAM component in 3D. The tubes have been removed to make the icons more visible.


Figure 3-11. The real machine is spatially reorganized to align with the VAM.


Figure 3-12. VAM-Context interaction. A user views how her interactions with the anesthesia machine affect the 2D VAM simulation.

Figure 3-13. Geometric transformation between the Real Machine-Context and VAM-Context.


Figure 3-14. Schematic diagram of the AAM hardware implementation.

Figure 3-15. Transforming a 2D VAM component to contextualized 3D.

Figure 3-16. The three states of the mechanical ventilator controls.


Figure 3-17. The pipes between the components represent the diagrammatic graph arcs. In A) the VAM, the arcs are simple 2D paths, whereas in B) the AAM, the arcs are transformed and reorganized in 3D.

Figure 3-18. A diagram of the magic lens tracking system.


Table 3-1. Methods of tracking various machine components.

Flowmeter knobs: IR tape on the knobs becomes more visible as a knob is turned; an IR webcam tracks the 2D area of the tape (Figure 6-5).
APL valve knob: same method as the flowmeter knobs.
Manual ventilation bag: a webcam tracks the 2D area of the bag's color.
Airway pressure gauge: a webcam tracks the 2D position of the red pressure gauge needle.
Mechanical ventilation toggle switch: connected to an IR LED monitored by an IR webcam.
Flush valve button: a membrane switch on top of the button, connected to an IR LED monitored by an IR webcam.
Manual/mechanical selector knob: the 2D position of IR tape on the toggle knob is tracked by an IR webcam.

Figure 3-19. The 2D tracking output for the anesthesia machine's knobs and buttons. The user is turning a knob.


Figure 3-20. The augmented Apollo anesthesia machine. A) A VAM simulation of a machine similar to the Apollo. B) The Apollo AAM. C) A close-up of the contextualized Apollo AAM piston ventilator.


CHAPTER 4
EVALUATION OF A MIXED SIMULATOR

The main differences between a mixed simulator approach (e.g., the AAM) and other current training simulation approaches are the types of interfaces, information representations (e.g., abstract or concrete), and displays. To evaluate and investigate the potential learning benefits specific to mixed simulators, we compared the AAM to four other types of simulation with varying interfaces, information representations, and displays. Specifically, we conducted a study in which 130 psychology students were given one hour of anesthesia training using one of the following five simulations (outlined in Table 4-1): (1) the VAM; (2) a stationary desktop version of the AAM with mouse and keyboard interaction (AAM-D); (3) the AAM mixed simulator (AAM-MR); (4) the physical anesthesia machine with no additional simulation (AM-R); and (5) a 2D interactive, desktop PC version of a photorealistic anesthesia machine with mouse-based interaction (AM-D). The participants were later tested with respect to spatial ability, gas flow visualization ability, training transfer, and their acquired knowledge of anesthesia machines. By comparing user understanding in mixed simulations and these other types of simulations, we aimed to determine the educational and cognitive benefits of mixed simulators.

Spatial cognition deals with how humans encode spatial information (i.e., about the position, orientation, and movement of objects in the environment), and how this information is represented in memory and manipulated internally [21] (see Chapter 2). One of the expected benefits of mixed simulators is an improved understanding of the spatial relationships between the diagram-based dynamic model and the physical device. In the study, we hypothesized that a mixed simulator's contextualized diagram-based dynamic model would compensate for users' low spatial cognition more effectively than other types of models (e.g., the abstract VAM model) and improve overall training transfer to real-world scenarios by scaffolding the user's understanding of abstract and concrete concepts.


Note that parts of this chapter were published in 3DUI 2008 [60] and the Journal of Computers and Graphics in 2009 [59].

4.1 Hypotheses

H1: A user's abstract concept understanding is correlated with the simulation's level of information abstraction.

H2: Mixed simulators scaffold learning transfer to the real machine by enabling the merging of abstract and concrete knowledge.

H3: Mixed simulators compensate for low spatial cognition.

4.2 Population and Study Environment

There were 130 participants in this study. All the participants were college students in an introductory psychology class. Students in this class are required to participate in a number of studies for credit in the course; thus, all the participants received credit for their participation in the study. The study protocol was approved prior to data collection by the University of Florida IRB (#2007-U-688). The study was conducted in a quiet, air-conditioned room. In each study session, there was one participant and one investigator in the room for the duration of the session.

4.3 Study Conditions

This study was conducted over two academic semesters. Because some additional metrics were added in the second semester, the VAM group and the AAM-MR group conditions were repeated, which is why these conditions include more participants. This section explains the details of each experimental condition. Each participant was taught anesthesia concepts using one of the following simulations, each of which varies in information representation, interface, or display (Table 4-1). By comparing these simulations, we investigated how information representation, interface, and display affect learning outcomes and spatial cognition.


4.3.1 VAM Group

The VAM group (n=38) was trained using the VAM. They visualized and interacted with the VAM's abstract representation using a desktop computer and a mouse. The details of the VAM are explained in Chapter 3.

4.3.2 AAM-MR Group

The AAM-MR group (n=39) was trained using the mixed simulator AAM-MR from Chapter 3. Participants used a tracked, 6-DOF magic lens to visualize an abstract simulation superimposed over a model of the real machine, which was registered to the real machine itself. To interact with the simulation, participants simply adjusted the real anesthesia machine controls.

4.3.3 AAM-D Group

The AAM-D group (n=18) was trained using a desktop version of AAM-MR. Participants visualized the same abstract simulation superimposed over a scale machine model, but they viewed it on a desktop computer. Participants used a mouse and keyboard to interact with the abstract simulation and to navigate the 3D space. 2D interaction with the simulation was identical to the VAM, but the 3D rendering was identical to the magic lens. The purpose of this condition was to discover the benefits of the magic lens and real machine interaction by comparing this group to AAM-MR.

4.3.4 AM-R Group

The AM-R group (n=20) was trained using a real anesthesia machine with no additional visualizations or simulation. They physically interacted with the controls (e.g., physical knobs and buttons) of the anesthesia machine and observed the meters and gauges. The details of the anesthesia machine can be found in Chapter 3.


4.3.5 AM-D Group

The AM-D group (n=15) was trained using a desktop-based photorealistic simulation of an anesthesia machine (i.e., the same real anesthesia machine that groups AM-R and AAM-MR used). The AM-D simulation is similar to an interactive video. Participants used the mouse to interact with the anesthesia machine simulation from a fixed viewpoint. The purpose of this condition was to investigate the benefits of haptics in the anesthesia machine and determine whether the same benefits exist in the AAM-MR group.

4.4 Study Procedure

For each participant, the study was conducted over a period of two consecutive days to minimize the contribution of superficial and short-term learning (i.e., memorization) to performance. The first day included several spatial cognitive tests and an anesthesia machine training module. The second day included two tests on anesthesia machines: a written test and a hands-on test with the real machine. The second day also included several questionnaires about subjective opinions of the learning module and personal information (i.e., computer usage and experience, GPA, etc.). Prior to arriving at the study, participants were unaware of the details of the study (i.e., they did not know it was about anesthesia machine training). When they arrived, they were given an informed consent form that gave them an overview of the study procedure. The procedure is as follows.

4.4.1 Day 1 (~90 Minute Session)

1. INTRODUCTION TO THE ANESTHESIA MACHINE (~10 minutes). Once participants finished the informed consent process, they were asked to put on a white lab coat so that they would feel more like an anesthesiologist. The lab coat was also used to reduce potential problems with the participants' clothes interfering with the color trackers. Participants were handed a manual that provided an introduction to the VAM. The manual was used in conjunction with an online interactive tutorial (http://vam.anest.ufl.edu/simulationhelp.html), which highlighted and explained each of the major VAM components and subsystems. The purpose of this was to ground users in machine function and structure without giving much in-depth information about the deeper concepts presented during the following training exercises, which were the basis of comparison for the different simulations in each condition.


2. COMPLETE 5 EXERCISES (~60 minutes). Each participant completed the same five exercises by following the manual and using one of the five simulations. For each exercise, a question was posed about a specific anesthesia machine concept (e.g., a particular component or subsystem). The participant then followed the manual, which explained how to interact with the simulation to answer this question. At the end of the exercise, the answer to the question was given. We timed how long it took participants to complete the five exercises.

3. SPATIAL COGNITION TESTS (~20 minutes). Participants were given three tests to assess their spatial cognitive ability: (1) the Arrow Span Test, (2) the Perspective Taking Test, and (3) Navigation of a Virtual Environment. Each of these is outlined in Chapter 2 and detailed in [21].

4.4.2 Day 2 (~60 Minute Session)

1. SELF EVALUATION (~5 minutes). Participants were asked to rate the proficiency in overall anesthesia machine understanding that they had gained from the previous day.

2. MACHINE COMPONENT IDENTIFICATION TEST (~5 minutes). Participants had to recall the names of 17 pictured simulation components (e.g., a flowmeters icon from the VAM) as a metric for memory retention. We wanted to investigate how each simulation type affects memory retention.

3. MACHINE COMPONENT FUNCTION TEST (~15 minutes). Participants had to recall the function of the 17 pictured simulation components (e.g., the scavenging system component's function is to control the disposal of excess gasses) as a metric for memory retention and abstract understanding. We wanted to investigate how each simulation type affects memory retention and abstract understanding.

4. MATCHING TEST (~5 minutes). This test measured how effectively participants could mentally map a simulation to the real anesthesia machine and was the primary metric for training transfer. Participants were asked to match components between a picture of the anesthesia machine and a picture of a more abstract simulation (e.g., the VAM or AAM).

5. SHORT ANSWER ANESTHESIA MACHINE TEST (~15 minutes). As a metric for abstract understanding, this set of short answer questions tested each participant's abstract understanding of how an anesthesia machine works.

6. MULTIPLE CHOICE ANESTHESIA MACHINE TEST (~5 minutes). This was a metric for abstract understanding. Again, we wanted to test each participant's abstract understanding of how an anesthesia machine works.

7. HANDS-ON ANESTHESIA MACHINE FAULT TEST (~10 minutes). This was a metric for concrete understanding. The purpose of this test was to assess how well each simulation prepared participants for a hands-on concrete scenario. In this test, each participant diagnosed machine malfunctions using only the real machine, without any additional simulation.


8. PERSONAL/SUBJECTIVE QUESTIONNAIRES (~5 minutes). Participants were asked several personal questions (i.e., computer experience, GPA, etc.).

4.5 Metrics

TIME TO COMPLETE THE 5 EXERCISES: Participants were timed as they worked through the five main exercises.

MACHINE COMPONENT IDENTIFICATION TEST: To assess participant memory retention of the machine components, participants were shown a picture of the simulation they had used the previous day (e.g., the AAM-MR group saw a screenshot from the magic lens; AM-R saw a picture of the real machine). For the 17 different components of the machine, participants were asked to identify (i.e., write the name of) each letter-labeled component. Each answer was graded out of a maximum of 4 points for a perfect answer. Points were deducted for partial or incorrect answers.

MACHINE COMPONENT FUNCTION TEST: To assess participant understanding and memory retention of the machine components, participants were shown a picture of the simulation they had used the previous day (e.g., the AAM-MR group saw a screenshot from the magic lens; AM-R saw a picture of the real machine). For the 17 different components of the machine, participants were asked to state the function (i.e., the purpose of the component and how it affected the gas flow). Each answer was graded out of a maximum of 4 points. Points were deducted for partial or incorrect answers.

MATCHING TEST: Participants were shown two pictures: a screenshot of their training simulation and a picture of an anesthesia machine. Each of these pictures had 17 specific components labeled randomly with letters. The participant had to match each labeled component in the simulation screenshot to a labeled component in the picture of the real machine. The purpose of this test was to investigate learning transfer of abstract information into concrete scenarios. Note that physical simulation users (AM-R and AM-D) did not complete this test because the two pictures shown would have been the same picture; that is, we assumed that if participants were shown two copies of the same picture of the machine, they would be able to match components between the pictures perfectly. Each answer was graded out of a maximum of 4 points. Points were deducted if the component was incorrectly matched. If participants could not match a real component to a simulation component but could identify the real component, then partial points were given.

SHORT ANSWER ANESTHESIA MACHINE TEST: This test consisted of 22 short answer questions, which gave an overall score of a participant's abstract knowledge. The test consisted of short answer and multiple-choice questions from the Anesthesia Patient Safety Foundation anesthesia machine workbook [37]. Note that this test assessed mostly abstract knowledge. For example, one Short Answer Anesthesia Machine Test question asked, "Is the inhalation valve bidirectional or unidirectional and why?" To correctly answer this question, one would need a deep understanding of the flow of invisible gasses in the machine and the function of the valves. Each question was graded out of a maximum of 4 points, and points were deducted for insufficient or incorrect answers.


MULTIPLE CHOICE ANESTHESIA MACHINE TEST: This test consisted of 7 multiple-choice questions, which measured the participant's abstract knowledge. The test consisted of multiple-choice questions from the Anesthesia Patient Safety Foundation anesthesia machine workbook [37]. Each question was worth 1 point for a correct answer and 0 points for an incorrect answer.

HANDS-ON ANESTHESIA MACHINE FAULT TEST: For this test, participants used only the anesthesia machine, without any type of computer simulation. The investigator first caused a problem with the machine (i.e., disabled a component). The participant then had to find the problem and describe what was happening with the gas flow. Participant performance on this test was assessed on one metric: whether the participant was able to identify the problem causing the machine fault. Participants were given as much time as they needed and stopped when they either identified the problem or quit. In the first semester (Table 4-2), participants were told that there was a fault and that they had to diagnose it. In the second semester, the difficulty of this test was increased by telling the participants that there may or may not be a fault present. In general, this test assessed the participant's concrete knowledge of the machine.

SELF-REPORTED DIFFICULTY IN VISUALIZING GAS FLOW (DVGF): When participants had completed the hands-on test, the investigator explained what it meant to mentally visualize the gas flow. Participants were then asked to self-rate how difficult it was to mentally visualize the gas flow in the context of the real machine on a scale of 1 (easy) to 10 (difficult). A small sketch of the per-component scoring used in the tests above follows.
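To make the rubric concrete, here is a minimal scoring sketch; the partial-credit fraction is a hypothetical stand-in for the graders' judgment, not the study's actual grading code.

    # Sketch of per-component scoring for the 17-component tests.
    MAX_POINTS = 4

    def component_score(correct: bool, partial_fraction: float = 0.0) -> float:
        """Full credit for a correct answer; deductions yield a fraction
        of the 4 available points; 0 for an incorrect answer."""
        if correct:
            return MAX_POINTS
        return MAX_POINTS * max(0.0, min(partial_fraction, 1.0))

    def test_score(per_component):
        """Sum over the 17 labeled components."""
        return sum(per_component)

    scores = [component_score(True)] * 15 + [component_score(False, 0.5)] * 2
    print(test_score(scores), "out of", 17 * MAX_POINTS)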


4.6 Results and Discussion

The purpose of this study was to investigate the specific cognitive benefits of mixed simulators as compared to other common simulation types. To conduct this investigation, mixed simulators were compared to four other types of simulations with varying information representation, interface, and display. However, due to logistical reasons, not all conditions could be conducted concurrently. The entire study was conducted over two academic semesters, but not all conditions were conducted in both semesters, and some metrics changed between semesters (Table 4-2). However, since there were significant findings in both semesters and some of the conditions were repeated in both semesters, this section presents a meta-analysis of the results.

This section is divided into two subsections. First, we discuss the results with respect to learning outcomes; that is, we explain how the simulation type impacted learning effectiveness. Second, we discuss how these learning outcomes may be correlated with spatial cognition.

4.6.1 Learning Outcomes Results

As shown in Table 4-3, abstract simulator users (VAM group) and physical simulator users (AM-R group) completed the five training exercises significantly (Table 4-4) faster than the mixed simulator users (AAM-MR). These results suggest that mixed simulators (AAM-MR) are less efficient than the other simulation types. However, note that for each of the simulation types, training time had minimal (i.e., not significant) correlation with the other metrics, such as the written tests and fault tests. This suggests that time is not a confound with respect to the results of the other metrics. That is, although the differences in time are statistically significant, the effect of time on learning outcomes is not significant.

However, it is unclear why mixed simulator users (i.e., the AAM-MR group) had more variance in training time and a higher average time. Some mixed simulator users trained for considerably longer times (e.g., two standard deviations above the average) than most mixed simulator users. It is possible that these users were more interested in the hands-on AAM-MR interaction and spent more time visualizing the machine with the magic lens. It is also possible that training time increased due to the increased complexity of the visualization, or that mixed simulator users spent additional time mentally merging the visualization concepts with the real world. More work will be needed to determine the cause of the increased training times.


4.6.1.2 Results of abstract concept understanding metrics: machine component identification test, machine component function test, short answer anesthesia machine test, multiple choice anesthesia machine test

Results in Tables 4-5, 4-6, and 4-7 show that abstract simulator users (e.g., the VAM group) had greater average scores than other users on the metrics of abstract concept understanding. These results are significant, as shown in Table 4-8, columns Machine Component Identification Test, Machine Component Function Test, and Short Answer Anesthesia Machine Test. This suggests that abstract simulators train abstract concepts more effectively than other types of simulators.

Intuitively, simulations that represent information more abstractly are likely to train abstract concepts more effectively. That is, there should be a correlation between the level of information abstraction and the effectiveness of abstract concept training. Our results (e.g., Figure 4-1) support this idea with respect to the simulation types used in this study. The chart shows how abstract concept understanding decreases with the level of information abstraction. For example, the VAM represents information the most abstractly of the simulations and likewise trains abstract concepts the most effectively. On the other hand, the AM-D, a photorealistic simulation, has arguably the least abstract information representation of the four conditions in Figure 4-1; this less abstract simulation trains abstract concepts less effectively. Further, mixed simulators (i.e., AAM-MR) are less abstract than abstract simulations (i.e., the VAM) but more abstract than physical (or photorealistic) simulations (AM-D). The average scores in Tables 4-5, 4-6, and 4-7 show that abstract concept understanding in mixed simulation falls between that of abstract and physical simulation. In general, these results from Tables 4-5, 4-6, 4-7, and 4-8 and Figure 4-1 support hypothesis H1 in that abstract concept understanding decreases with the level of information abstraction in a training simulation. In general, mixed simulation may enable a level of information abstraction between abstract and physical simulation.


If we consider mixed simulation from an educational scaffolding perspective, then it enables another level of fading. That is, a student could progress through the simulations in order of decreasing level of abstraction (i.e., (a) VAM, (b) AAM-D, (c) AAM-MR, (d) AM-D), as shown in Figure 4-1. As the student progresses through each of the systems, the scaffolds of abstraction are gradually removed. Because of this, we expect that a mixed simulator's additional level of abstraction will enable a smoother transition between abstract and physical simulations. This concept is also supported by the psychology literature [19].

4.6.1.3 Results of concrete understanding: fault tests

The fault test assessed participants' concrete knowledge of the machine by requiring them to interact with the machine without the use of a simulation. In this test, the participant was first sent outside of the room. The investigator then removed a small yet vital piece of the inhalation valve (called the leaflet), which allowed gas to flow in both directions through the valve. This simulated a leak in the valve; in a real scenario, this leak would cause the patient to rebreathe carbon dioxide. When participants returned to the room, at first glance the system appeared to be operating normally (i.e., there were no alarms sounding). Participants had to detect and identify that a small piece was missing from the inhalation valve.

Results show that there was a significant difference (Table 4-10) between the groups in concrete concept understanding (i.e., the fault test metric). As shown in Table 4-9, physical simulation users (i.e., the AM-R group) found the faults most frequently and thus had the best concrete concept understanding. Abstract simulation users (e.g., the VAM group) found faults least frequently and thus had the worst concrete concept understanding. Mixed simulation users' (e.g., the AAM-MR group) frequency was between that of the abstract and physical simulation users. In general, a simulation that represents information more concretely will train concrete concepts more effectively. Again, these results show that mixed simulators effectively mix these two types of information, which results in a mixing of abstract and concrete understanding. This supports the notion that mixed simulation can offer a smoother transition between abstract and physical simulation.


4.6.1.4 Results of the matching test

The matching metric directly assessed participants' ability to map the simulation components to the corresponding real machine components. As shown in Table 4-11, mixed simulation users (i.e., the AAM-MR group) were found to have significantly (Table 4-12) better matching scores than abstract simulation users (the VAM group) (p=0.054). Surprisingly, magic lens-based mixed simulator users (i.e., AAM-MR) also had significantly better matching scores than the desktop monitor-based system (the AAM-D group) (p=0.037). This supports hypothesis H2, that mixed simulation significantly improves transfer of learning from abstract to concrete domains.

The AAM-MR group performed matching significantly better than AAM-D even though the information representation was the same (i.e., the computer-generated visualization). The main differences between these two conditions are the display and the interface: AAM-MR uses a 6-DOF magic lens display and an anesthesia machine interface, whereas AAM-D uses an untracked desktop monitor display and a mouse and keyboard interface. Because AAM-MR users performed better than AAM-D users in the measure of training transfer (i.e., matching), this suggests that the more immersive, hands-on interface of the magic lens and real machine may enable training transfer more effectively than the less immersive interface of AAM-D.

4.6.1.5 Results of self-reported difficulty in visualizing gas flow (DVGF)

After the fault test, participants were asked how difficult it was for them to mentally visualize the gas flows in the context of the real machine during the fault test (i.e., the DVGF metric). As shown by the averages in Table 4-13, the mixed simulation users (i.e., AAM-MR) had significantly lower Self-Reported Difficulty in Visualizing Gas Flow (DVGF) (see Table 4-14 for significance).


These results suggest that mixed simulator users found it easier to visualize gas flow in the context of the real machine than users who trained with any of the other simulation types. This suggests that mixed simulators may be helpful for recalling and understanding spatial information such as gas flow dynamics. However, it was particularly surprising that AAM-MR users had less difficulty than AAM-D users in visualizing gas flow. AAM-MR and AAM-D share the same computer-generated visualization; however, AAM-MR has a magic lens display and real anesthesia machine interface, whereas AAM-D has a desktop monitor display and a mouse and keyboard interface. It is possible that the AAM-MR group had an advantage due to their physical interactions with the machine, as opposed to the keyboard and mouse interaction of the AAM-D group. These physical interactions may have served as grounding for their visual memory.

This difference in DVGF could also be due to differences between the magic lens interaction style and the desktop computer's interaction style. In AAM-D, users often picked a convenient, stationary viewpoint that allowed them to visualize all the gas flows at once. In AAM-MR, however, the lens is often used like a magnifying glass: many of the participants used it to visually follow the gas flows in the simulation (i.e., they observed a zoomed-in view and moved the lens along the direction of the flow). This type of intuitive lens interaction may have made it easier for them to mentally visualize and recall gas flows when not using the simulation.

4.6.1.6 Discussion of proposed usage of simulations for scaffolding

It is important to note that each simulation in the study offers a different representation of an anesthesia machine, and each representation offers a different level of abstract and concrete knowledge. This suggests that each of the simulations (e.g., VAM, AAM-MR) used in the study may offer a different level of scaffolding. That is, training can be scaffolded through the fading of (1) the level of information abstraction, (2) the level of interaction (e.g., immersion), and (3) the proportion of virtual and real objects.


Because each of the simulation types offers different scaffolds, we propose that all of these simulation types be used progressively in the learning process to scaffold learning at different stages of competence.

4.6.1.7 Discussion of the scaffolding benefits of mixed simulators

One of the main purposes of this work was to determine the specific scaffolding benefits of MR-based displays and interfaces (such as magic lenses and tangible interfaces), specifically mixed simulators. Critics have suggested that a desktop-based system (such as AAM-D) may have learning benefits equivalent to an MR-based system (such as AAM-MR). This skepticism is understandable in the case of the AAM-MR because the AAM-D group's desktop renders exactly the same visuals as the magic lens in AAM-MR. The only differences between the systems are (1) the type of interaction and (2) the context of the rendering. Perhaps the collocated, in-context rendering coupled with collocated physical interaction helps to solidify the spatial and functional relationships between the abstract simulation and the concrete device.

There is also a unique aspect of magic lenses that may improve scaffolding: the user can interactively fade the scaffolding in various ways. We have observed users zooming out and using the lens to view the whole simulation at once. Other times, users interact with the lens like a magnifying glass, inspecting details of the simulation in context. For example, the user can observe the gas flows of a specific component in the context of the real machine while still experiencing the human field of view (~120 degrees). Within the bounds of the lens, a subset of the user's view visualizes the overlaid abstract simulation; this subset is shown within the context of the surrounding real-world view. Furthermore, users can interactively choose to lower the lens and observe only the real machine. These types of behaviors are examples of the interactive fading that the magic lens interface affords.


We have not yet performed an analysis to quantify how often participants perform these interactive fading behaviors or what effect their frequency has on performance. However, we have observed many users performing interactive fading quite often, and we expect that this interactive fading may be one of the causes of the AAM-MR group's improved training transfer.

4.6.2 Learning Outcomes Correlation to Spatial Cognition

The purpose of this section is to outline the impact of the various simulation types on spatial ability. Note that the spatial ability test results and the personal/subjective questionnaire results are omitted here (i.e., the scores are omitted in our report of the results) since there were no significant differences between groups for these. However, Pearson correlations between the other metrics and the spatial ability test results were found to be significant and are presented here. For Pearson correlations, significance is marked as follows: * is p<0.1, ** is p<0.02, *** is p<0.01.

4.6.2.1 Discussion of DVGF correlation to spatial cognition

The correlations (i.e., Table 4-15) can be interpreted as follows. Higher DVGF scores mean the participant had greater difficulty visualizing gas flow. For the Arrow Span test, the best score was 60, and decreasing scores denote lower small-scale ability. For large-scale ability, the best sketch map score was 0, and increasing scores denote lower large-scale ability. For example, in Table 4-15, the VAM group's sketch maps had a +0.61 correlation with their self-reported DVGF scores. This means that when a VAM user finds it more difficult to visualize gas flow (DVGF), they also tend to have lower large-scale spatial ability. Conversely, mixed simulator users (AAM-MR) had a non-significant 0.11 correlation between spatial cognition tests and DVGF scores. This means that mixed simulator users found it easier to mentally visualize gas flow regardless of spatial ability.


That is, in other simulations (e.g., the VAM), users with low spatial ability had more difficulty visualizing gas flow in the context of the real machine. This suggests that mixed simulators compensate for low spatial cognition. It is also possible that mixed simulators compensate for limitations in spatial cognition for more users than just the low spatial ability users (e.g., all users); however, more analysis is needed to determine this.

Interestingly, both AAM-MR and AAM-D users' spatial cognition test scores had minimal correlation with gas visualization difficulty (Table 4-15). Although these systems had different interfaces and displays (see Table 4-1), both utilized the same computer-generated visualizations: the abstract VAM components were superimposed over a photorealistic geometric model of the real machine. In both cases, spatial cognition minimally affected gas flow visualization ability. This suggests that the visualization method of superimposing abstract models over photorealistic geometric models may help to compensate for users with low spatial cognition in visualizing and recalling dynamic spatial information.

4.6.2.2 Discussion of abstract concept understanding correlation to spatial cognition

The correlations between the written tests (i.e., the metric for abstract concept understanding) and the spatial cognition tests (Table 4-16) can be interpreted as follows. On the written test, a higher score denotes a better understanding of the information. This is correlated in the VAM and AM-R groups with spatial ability. For example, when a VAM user has a higher large-scale ability score, they tend to better understand the information (higher written test score). A similar effect in small-scale ability can be found with the AM-R group. It is not surprising that individual differences in spatial ability correlate with performance on a written, verbal test, since in the present case the knowledge being tested involves the dynamics of gas flow and causal relations among machine components. The sketch below illustrates how such correlations can be computed and compared.
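For concreteness, the following sketch shows how a Pearson correlation and a Fisher r-to-z comparison of two correlations (as applied in Section 4.6.2.3) can be computed; the data values here are illustrative, not study data.

    # Sketch of the statistics used in this section.
    import math

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def fisher_r_to_z_test(r1, n1, r2, n2):
        """z statistic for the difference between two independent
        correlations, via the Fisher r-to-z transformation."""
        z1, z2 = math.atanh(r1), math.atanh(r2)
        se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
        return (z1 - z2) / se

    sketch_map_scores = [2, 5, 1, 7, 3, 6]  # higher = lower large-scale ability
    dvgf_scores       = [3, 8, 2, 9, 4, 7]  # higher = more difficulty
    print("r =", round(pearson_r(sketch_map_scores, dvgf_scores), 2))
    print("z =", round(fisher_r_to_z_test(0.61, 38, 0.11, 39), 2))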

PAGE 90

Results suggest that AAM-MR and AAM-D participants' spatial cognition had less impact on written test performance (Table 4-16) than the other types of simulation. The written test was a measure of participant understanding of anesthesia concepts. In the case of AAM-MR and AAM-D, lower levels of spatial cognition skill did not impede their understanding as it appeared to in the VAM and AM-R groups. This suggests that the contextualization method of superimposing abstract models over physical (or photorealistic, in the case of the AAM-D) phenomena may compensate for users with low spatial cognition when users are presented with complex concepts.

4.6.2.3 Discussion of matching correlation to spatial cognition

Matching was a measure of ability to map the simulation components to the real phenomenon. As stated in section 4.6.1.4, results suggest that AAM-MR significantly (p = 0.04) improved matching ability (Table 4-11). One reason for this improvement may be that the AAM compensated for low spatial cognition (Table 4-17). In the AAM, spatial cognition test scores were significantly (using the Fisher r-to-z transformation, p = 0.06) less correlated to the matching scores than in the VAM. VAM participants that scored lower in matching had lower spatial ability. This suggests that the AAM-MR compensates for low spatial cognition and supports our hypothesis H3. Because of this compensation for low spatial cognition, mixed simulators may be effective in addressing the aforementioned spatial mapping problem.
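To make the statistics above concrete, the sketch below shows how a Pearson correlation and a Fisher r-to-z comparison of two independent correlations can be computed. This is an illustrative recomputation of the kind of test reported in this section, not the study's analysis scripts; the score lists and the r and n values are hypothetical placeholders.

```python
# Minimal sketch (Python/SciPy) of the tests reported in this section.
# All numeric values below are hypothetical placeholders, not study data.
import math
from scipy.stats import norm, pearsonr

def fisher_r_to_z_p(r1, n1, r2, n2):
    """Two-tailed p value for the difference between two independent
    Pearson correlations, via the Fisher r-to-z transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return 2.0 * norm.sf(abs(z1 - z2) / se)

# Pearson correlation between, e.g., Arrow Span scores and matching scores:
arrow_span = [42, 55, 38, 60, 47, 51]      # hypothetical small-scale scores
matching = [2.0, 3.5, 1.5, 4.0, 2.5, 3.0]  # hypothetical matching scores
r, p = pearsonr(arrow_span, matching)

# Comparing one group's correlation (r=0.6, n=20) to another's (r=0.3, n=20):
print(fisher_r_to_z_p(0.6, 20, 0.3, 20))
```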
4.7 Chapter Summary

This chapter presented the results of a study that investigated the impact of mixed simulators on spatial cognition and learning. In the study, a mixed simulator was compared to four other types of simulation, each varying in information representation, interaction, and display. We hypothesized that mixed simulators would enable users to merge abstract and concrete knowledge through the compensation for low spatial cognition.

Results from anesthesia education metrics for abstract understanding and concrete understanding showed that mixed simulator users scored approximately halfway between abstract simulator users and physical simulator users. Moreover, in the metrics for transferring abstract knowledge to the physical simulator context (i.e., matching and DVGF), mixed simulator users scored better than users of other simulator types. These results suggest that mixed simulators offer a representation, interface, and display that may effectively enable scaffolding between abstract simulators and physical simulators. Moreover, abstract and physical simulator user data had significant correlations between the abstract understanding metrics and spatial ability. However, mixed simulator user data had significantly less correlation between the abstract understanding metrics and spatial ability. This suggests that mixed simulators compensate for users with low spatial ability. We believe that this enables users to merge their abstract and concrete knowledge more effectively than with other types of simulation. This merging likely aids mixed simulator users in scaffolding and learning transfer between abstract and physical simulations.

4.8 Conclusions

Psychology research has shown that scaffolding with multiple representations that fade from abstract to concrete is beneficial in learning difficult concepts [27, 28, 34, 35, 66]. The results of our user study with the AAM support the thesis in that the mixed simulator allows users to visualize and interact with abstract concepts in the context of the real world, thereby compensating for low spatial cognition and enabling the user to more effectively transfer their abstract knowledge into the corresponding real world scenario than current methods of simulation-based training. The results suggest that the mixed simulator may be an effective educational scaffolding tool that can bridge abstract and concrete knowledge in the learning process through visual and interaction collocation. In the next chapter, we extend this approach
to immersive visualization of large data sets and explore temporal collocation of current and past timelines in the novel application of collocated after action review.

Table 4-1. Differences in compared simulations
  Simulation  Information Representation   Interface           Display
  VAM         abstract                     mouse               desktop monitor
  AAM-D       mixed abstract and concrete  mouse and keyboard  desktop monitor
  AAM-MR      mixed abstract and concrete  anesthesia machine  magic lens
  AM-R        concrete                     anesthesia machine  none
  AM-D        concrete                     mouse               desktop monitor

Table 4-2. Study conditions and metrics used per semester
  Fall 2007: AAM-MR (n=20), VAM (n=20), AM-R (n=20). Metrics: Time to Complete 5 Exercises, Spatial Cognition Tests, Short Answer Anesthesia Machine Test, Multiple Choice Anesthesia Machine Test, Fault Test, DVGF.
  Spring 2008: AAM-MR (n=19), VAM (n=18), AAM-D (n=18), AM-D (n=15). Metrics: Spatial Cognition Tests, Machine Component Identification Test, Machine Component Function Test, Matching Test, Short Answer Anesthesia Machine Test, Multiple Choice Anesthesia Machine Test, Fault Test, DVGF.

Table 4-3. Time to complete the 5 training exercises (first semester)
  Group   Average Minutes  Stdev
  VAM     21.5             5.49
  AAM-MR  35.6             15.68
  AM-R    23.1             4.67

Table 4-4. Time to complete 5 exercises, univariate ANOVA tests (pair-wise differences shown)
  Conditions Compared  Time to Complete 5 Exercises (p value)
  AAM-MR vs. VAM       <0.001
  AAM-MR vs. AM-R      0.002
  VAM vs. AM-R         0.324
Table 4-5. Machine component identification test results
  Group   Average Score per Question  Stdev
  VAM     1.73                        0.74
  AAM-D   1.55                        0.59
  AAM-MR  1.39                        0.58
  AM-D    1.02                        0.81

Table 4-6. Machine component function test results
  Group   Average Score per Question  Stdev
  VAM     2.24                        0.92
  AAM-D   1.72                        0.72
  AAM-MR  1.58                        0.83
  AM-D    1.12                        0.83

Table 4-7. Short answer anesthesia machine test results
  Group   Average Score per Question  Stdev
  VAM     1.83                        0.73
  AAM-D   1.71                        0.57
  AAM-MR  1.45                        0.67
  AM-D    1.27                        0.47

Table 4-8. Abstract concept understanding univariate ANOVA tests (pair-wise differences shown)
  Groups Compared   Identification Test (p)  Function Test (p)  Short Answer Test (p)
  AAM-D vs. VAM     0.460                    0.074              0.577
  AM-D vs. VAM      0.005                    <0.001             0.016
  AM-D vs. AAM-D    0.036                    0.051              0.065
  AAM-MR vs. VAM    0.147                    0.021              0.077
  AAM-MR vs. AAM-D  0.499                    0.635              0.242
  AAM-MR vs. AM-D   0.127                    0.119              0.434
Figure 4-1. Average function understanding vs. level of abstraction.

Table 4-9. First semester fault test results
  Group   # Participants Correctly Diagnosed
  VAM     3 of 20
  AAM-MR  9 of 20
  AM-R    18 of 20

Table 4-10. Fault test chi-squared test results
  Groups Compared  Fault Test (chi-squared p)
  AAM-MR vs. VAM   0.082
  AAM-MR vs. AM-R  0.006
  VAM vs. AM-R     <0.001

Table 4-11. Matching test results
  Group   Average Score per Question  Stdev
  VAM     2.56                        0.95
  AAM-D   2.50                        0.99
  AAM-MR  3.12                        0.84

Table 4-12. Matching test univariate ANOVA results (pair-wise results)
  Groups Compared   Matching Test (p value)
  AAM-D vs. VAM     0.875
  AAM-MR vs. VAM    0.054
  AAM-MR vs. AAM-D  0.037
Table 4-13. Self-reported difficulty in visualizing gas flow (DVGF)
  Group   Average  Stdev
  AAM-MR  3.79     1.72
  VAM     5.28     2.13
  AM-R    5.50     1.91
  AM-D    5.41     2.18
  AAM-D   5.52     2.10

Table 4-14. Analysis of DVGF variance (univariate ANOVA with pair-wise differences)
  Groups Compared   p value
  AAM-MR vs. AM-R   0.01
  AAM-MR vs. VAM    0.05
  AAM-MR vs. AM-D   0.04
  AAM-MR vs. AAM-D  0.01

Table 4-15. DVGF correlations to spatial cognition tests
  Group   Arrow Span  Nav. Sketch Map
  AAM-MR  +0.01       -0.06
  VAM     -0.40*      +0.61***
  AM-R    -0.53***    +0.16
  AM-D    -0.02       -0.30
  AAM-D   +0.12       -0.04

Table 4-16. Written test score correlations to spatial cognition tests
  Group   Arrow Span  Nav. Sketch Map
  AAM-MR  +0.17       -0.33
  VAM     +0.32       -0.50**
  AM-R    +0.61***    -0.23
  AM-D    +0.13       -0.08
  AAM-D   -0.19       -0.38

Table 4-17. Matching correlations to the Arrow Span test
  Group   Correlation to Arrow Span
  VAM     0.63***
  AAM-D   0.37
  AAM-MR  0.29
CHAPTER 6
IMMERSIVE VISUALIZATION OF LARGE DATA SETS WITH MIXED SIMULATORS

Immersive displays and interfaces have been shown to afford users specific cognitive and perceptual benefits for information visualization [64]. For example, immersive displays such as CAVEs can augment human perception in spatially complex scenes, such as visualization of multi-dimensional data and large data sets [7]. However, large data sets are typically represented abstractly in immersive displays. While this abstract representation does likely have perceptual benefits, it still may be difficult for many users to apply the abstract information in real world scenarios. Since mixed simulators were successfully applied to a similar problem, as presented in Chapters 3 and 4, this chapter presents a mixed simulator-based immersive visualization system for large data sets (e.g., the logged gaze and interaction data from the training sessions of an entire class of students).

When visualizing large sets of data it is important to consider focus + context [64]. That is, certain data may be of higher interest to the user, but this data may not be meaningful to the user if not presented in the context of the surrounding related data (e.g., the corresponding real world phenomena). To address this issue, we present a novel mixed simulator approach to visualize focus data in the context of the real world. Specifically, we apply mixed simulators to visualization of spatiotemporal training data aggregated from a group of students who attempted to diagnose and repair malfunctions in an anesthesia machine. Our implementation of the mixed simulator uses a magic lens to enable interactive focus as well as to display an abstract visualization of aggregate anesthesia machine interactions in the context of the real anesthesia machine. The purpose of this work is to enable novel abstract visualization of large datasets while maintaining relevant contextual information from the real world. Note that this work was published at ISMAR 2008 [57].
5.1 Mixed Simulators for Immersive Visualization of Past Experiences

In this chapter, we investigate how mixed simulators can enhance immersive information visualization in an important real world application: after action review (AAR), or self-debriefing, of training experiences. In training applications (e.g., military [9] and medical training [78]), MR collocates real and virtual information, which can enhance visualization, interaction, and learning during training, as exemplified with the augmented anesthesia machine (AAM) in Chapters 3 and 4. However, MR is rarely used after the experience (the AAR phase), when there is possibly a much larger amount of data to visualize. Most current AAR systems consist of reviewing videos of a student's training experiences, which allows students and educators to play back, critique, and assess performance. However, video-based review consists of non-immersive fixed viewpoints, primarily real-world information (i.e., no virtual overlay or augmentations as found in MR), and few aggregate visualization techniques. Aggregate visualization could facilitate a much needed analysis of class-wide trends.

To address these challenges, we propose to augment AAR with a mixed simulator to facilitate collocated AAR: immersive and contextualized abstract visualization of past experiences collocated inside the training area. Mixed simulators enable overlay of abstract information in context with the real world, a user-controlled egocentric viewpoint, and tangible interaction with real objects. These properties may increase the level of immersion in after action reviews and provide novel interaction approaches for visualization that will address the challenges of maintaining focus + context in information visualizations.

To investigate this, we designed and evaluated a mixed simulator-based collocated AAR system for immersive visualization of past experiences: the Augmented Anesthesia Machine Visualization and Interactive Debriefing system (AAMVID). It merges the playback features of AAR with the immersive augmentation features of mixed simulators. AAMVID features include
a user-controlled review experience from a first-person viewpoint. Users can review an abstract simulation of an anesthesia machine's internal workings that is registered to a real anesthesia machine (Figure 5-1). During the AAR, previous interactions are collocated with current real-time interactions (Figure 5-1), enabling interactive instruction and correction of previous mistakes in situ (i.e., in place with the anesthesia machine). Similar to a video-based review, AAMVID meets many of the educational needs of educators and students by offering recording and playback controls. Further, AAMVID facilitates focus + context visualization by collocating abstract visualization of recorded experiences (i.e., focus) with the anesthesia machine and the user's current real-world experience (i.e., context).

AAMVID is an extension of our previous work with the Augmented Anesthesia Machine (AAM) [58]. Based on our previous findings explained in Chapter 4, we expect that the AAM's cognitive benefits will transfer from immersive real-time training visualization to immersive visualization of past experiences (i.e., larger data sets).

From an application standpoint, collocated AAR must meet the pedagogical needs of educators and students. Students need directed instruction, repetition (deliberate practice), and feedback to bring them to a level of competency. Educators need to assess students' approaches to problems. To make this assessment, they need to identify means, outliers, and class-wide trends. This work investigates how mixed simulators can enhance immersive visualization to meet these pedagogical needs. Because of the different educational needs of educators and students, two versions of AAMVID were created and evaluated separately. The student version (AAMVID-S) enables students to review and interact with both their own previous interactions and an expert's previous interactions. The educator version (AAMVID-E) enables educators to visualize and interact with
the aggregated performance of multiple students. Both the student and educator visualize this abstract focus data in the context of a real anesthesia machine.

To enable this type of experience, there is much data that must be visualized in context with the real world. To visualize this data, the following research challenges must be addressed in an immersive visualization system:

- Time: collocating playback time with real time
- Interaction: collocating recorded expert or student interactions with current interactions
- Visualization: collocating recorded users' viewpoints and abstract information with the current user-controlled view

In this chapter, we propose a novel mixed simulator-based approach that addresses these needs. Mixed simulator-based immersive visualization is evaluated in two usability studies. In the first study, nineteen students learned about anesthesia machines and then used AAMVID-S to review their experiences. Then, three educators used AAMVID-E to review the aggregate data obtained from the nineteen study participants. These studies aimed to determine the advantages of applying mixed simulators for immersive visualization of large data sets.

This chapter is organized as follows. In section 5.2, we describe previous work that is relevant to the application of AAR. In sections 5.3 and 5.4 we describe the design and usability evaluation of AAMVID-S for students. Finally, in sections 5.5 and 5.6 we describe the design and usability evaluation of AAMVID-E for educators.

5.2 After Action Review

After Action Reviews (AAR) originally stem from the war games practiced in military command strategy review (e.g., outcomes after moving troops). AAR was later developed to review combat missions and training exercises for both commanders and soldiers. For example, AAR allowed soldiers to discover for themselves what happened, why it happened, and how to
sustain strengths and improve on weaknesses [13]. Since then, AAR has been extended into the industrial, medical, and educational domains.

5.2.1 After Action Review Systems

There are numerous AAR systems for many fields of training. For military training, TAARUS [1] and DIVAARS [33] use maps and graphs to allow AAR of troop movements and of battlefield simulations. More generally, behavior has been studied using AAR. For example, Phloem [49] visualizes large sets of behavioral data. AAR has been used for review of virtual experiences as well. IPSVis [61] is an AAR system geared towards interpersonal simulation, specifically human-virtual human interaction. Medical students use IPSVis for AAR of physician-patient interviews using virtual human patients. IPSVis was shown to impact students' self-perception of how effectively they communicated with patients.

Collocated AAR builds upon these previous approaches to AAR. Unlike the previous AAR systems, collocated AAR is immersive in that it is performed in the context of the training area. Thus, users perform collocated AAR in the same space that they trained in, rather than at a desktop or another non-immersive remote location (e.g., at home) outside of the training area.

In mixed and virtual reality, there has also been some relevant work in using immersive visualization of expert interactions to direct training in the context of the surrounding real world. Chua et al. [11] created a system to train students with expert Tai Chi movements from a user-controlled, first-person perspective. Sielhorst et al. [65] created new ways of quantitatively comparing expert and novice 3D interactions in Augmented Reality (AR) with an application to forceps delivery training. In addition, Sielhorst et al. effectively collocated the novice and expert interaction visualizations.
5.2.2 Video-Based AAR in Education

In training and education (e.g., healthcare and anesthesia education), students need repetition, feedback, and directed instruction to achieve an acceptable level of competency, and educators need assessment tools to identify trends in class performance [27]. To meet these needs, current video-based AAR systems offer educators and students the ability to play back (i.e., play, fast forward, rewind, and pause) training sessions repeatedly and at their own pace. Some video-based AAR systems (such as Studiocode [67], which costs $25,000) allow educators to manually annotate the video. That is, users can visually mark important moments in the video, such as when a mistake was made and what kind of mistake. This type of annotation helps to direct student instruction and educator assessment. Video-based AAR is widely used in training because it meets many of the educators' and students' educational needs. However, video-based review consists of fixed viewpoints and primarily real-world information (i.e., the video is minimally augmented with virtual information). The goal of this work is to use immersive visualization to integrate additional interactive and visual feedback modalities that are not present in typical AAR systems.

5.3 Enabling Immersive Visualization for Students with the Augmented Anesthesia Machine Visualization and Interactive Debriefing System (AAMVID)

AAMVID-S is a mixed simulator for collocated AAR of anesthesia machine fault tests in anesthesia education. First a fault is caused in the machine: a problem in the machine unknown to the student and intentionally caused by the educator, such as a disabled component. Then students attempt to diagnose and correct the machine fault by interacting with the real anesthesia machine (with no help from additional visualizations or simulations). Once this test is completed, students can use AAMVID-S for the collocated AAR of the test. This section describes the AAMVID-S implementation and presents the results of a usability study with 19 students. The
goal of this research is to evaluate the advantages of mixed simulators in a novel focus + context approach for immersive visualization.

5.3.1 Visualizing Past Experiences in Context with the Real World

The application-specific goals of AAMVID-S are to allow students to (1) review their performance in the context of the training environment, (2) review an expert's performance for the same fault in the context of the training environment, (3) interact with the physical anesthesia machine while following a collocated expert-guided tutorial, and (4) observe a collocated abstract visualization of the machine's internal workings during (1), (2), and (3). To engineer these features, we used a tracked 6DOF magic lens display and designed software that logged student and expert interactions. During the AAR, AAMVID-S allows a student to play back previous interactions, visualize the chain of events in context with the real machine that made up the previous interactions, and visualize where the user and the expert were each looking during their respective interactions.

To make it easier for students to differentiate between focus and context, we decided to decrease the potential size of data in focus by only allowing students to view the playback of one previous experience at a time (e.g., a user's previous experience or the expert's experience). The purpose of this decision was to decrease student confusion. However, AAMVID does visualize interactions from a recorded experience in situ with the user's current real world experience and interactions.

5.3.2 Logging Student and Expert Interaction

To generate focus visualizations for collocated AAR, two types of data are logged during the fault test: head-gaze and anesthesia machine states. For head-gaze, the user wears a hat (Figure 5-1B), tracked with retro-reflective tape and IR-sensing web cams. This enables the system to log the head-gaze direction of the user. For the anesthesia machine state, the AAM
tracking system, described in [58], tracks the states of the machine. The changes in these states are then processed to determine when the user interacted with the machine. A student log is recorded when a student performs a fault test prior to the collocated AAR. Our expert log data was recorded when Dr. Samsun Lampotang, an anesthesia educator and a co-author of this paper, performed each of the fault tests.

5.3.3 Abstract Visualization of Machine Faults

In AAMVID-S, students physically interact with the real machine and use a 6DOF magic lens to visualize how these interactions affect the internal workings and invisible gas flows of the real machine. Similarly, to visualize fault behavior, specific faults were physically caused in the real machine and visualized in the abstract simulation. For example, one fault involves a faulty inspiratory valve, which can be potentially harmful to a patient. Figure 5-2A is what the student sees in a real machine. Figure 5-2B is what the student sees on the magic lens during the AAR. Because the magic lens visualizes abstract concepts like invisible gas flow, AAMVID-S allows students to observe how a faulty inspiratory valve affects gas flow in context with the real valve. In Figure 5-2, the abstract valve icons are both open. The horizontal lines are located at the top of the valve icons, which denotes open valves.

5.3.4 Event Chain Visualization

When viewing a complex scene in an immersive visualization, it is helpful for users to know where to focus their attention. Moreover, from an application standpoint, to learn from and critique performance students need to review (i.e., focus on) the specific actions they performed during the fault test and compare their actions to an expert's actions. To meet these needs, AAMVID-S enables students to abstractly visualize the chain of interaction events that occurred during the fault test. For example, a student or expert might have turned the O2 flow control knob, then turned on the ventilator, and then pressed the oxygen flush. The AAMVID-S logging
system notes these changes in machine state and visualizes them using interaction event boxes that display in the context of where in the machine the interaction was performed. When an event occurs during playback, an interaction event box appears that is collocated with the corresponding control (Figure 5-3). For example, when the student turned the O2 knob, an interaction box pops up next to the control and prints out that the student increased the O2 flow by a specific percentage. To direct the user's attention to the next event, a 3D red line is rendered that slowly extends from the last interaction event position towards the position of the next upcoming event. Lines between older events are blue, indicating that the events have passed. By the end of the playback timeline, these lines connect all the interactions that were performed in the experience. This forms a directed graph where the interaction boxes are the nodes and the lines are the transitions between them.

To enable interaction event boxes, AAMVID-S uses a logging system coupled with the internal simulation of the machine. The AAMVID-S logging system is built upon the AAM system, which simulates the gas flow and internal states of the components with a rule-based finite state machine [16] (FSM). This FSM takes input from the AAM tracking system. Based upon this input, the AAM updates the visualized internal machine states and gas flows. Changes in the internal states are then used to detect specific interaction events (e.g., when the user turns the O2 knob, the FSM changes state and the simulated O2 particles visually increase in speed). When an interaction is detected, the state of the simulation is keyframed (i.e., a timestamp, the particle positions, the machine states, and tracking information are all saved in random access memory) for later playback.
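As an illustration of the logging step described above, the following sketch records a keyframe whenever a tracked machine control changes state. It is a simplified stand-in for the AAM's actual FSM and tracking code; the control names and state structure are hypothetical.

```python
# Simplified sketch of interaction-event keyframing; the control names
# and state layout are hypothetical, not the actual AAM implementation.
import copy
import time
from dataclasses import dataclass

@dataclass
class Keyframe:
    timestamp: float     # when the interaction occurred
    machine_state: dict  # full snapshot of the tracked machine state
    event: str           # human-readable description for the event box

class InteractionLogger:
    def __init__(self, initial_state):
        self.state = dict(initial_state)
        self.keyframes = []

    def update(self, tracked_state):
        """Call once per frame with the tracked machine state; a keyframe
        is recorded for every control that changed since the last frame."""
        for control, value in tracked_state.items():
            if self.state.get(control) != value:
                self.keyframes.append(Keyframe(
                    timestamp=time.time(),
                    machine_state=copy.deepcopy(tracked_state),
                    event=f"{control} changed to {value}"))
        self.state = dict(tracked_state)

logger = InteractionLogger({"o2_knob": 0.0, "ventilator": "off"})
logger.update({"o2_knob": 0.4, "ventilator": "off"})  # records one keyframe
```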
Note that there are many alternative approaches to directing user attention to specific objects in mixed reality. Various techniques are evaluated in [62]. We chose the 3D line technique due to its minimization of visual clutter when compared to other techniques.

5.3.5 Playback: Manipulating Virtual Time

An advantage of traditional video-based AAR systems is the ability to play, pause, rewind, and fast-forward. AAMVID-S implements this playback interface with 2D buttons that users click with a pen interface. AAMVID-S users are able to jump (fast forward) to the next interaction event, jump (rewind) to the previous event if they missed something, or pause the playback to observe the interaction at their own pace. One additional advantage of AAMVID-S is that it allows students to view any point in time from a user-controlled viewpoint. For example, students can pause the interaction playback and then move to a different viewpoint to visualize a key point in time or previously occluded information (i.e., internal gas flows).

5.3.6 Look-at Indicator

Knowing where to direct and focus attention is one of the difficulties for users of spatially complex visualizations. Similarly, this is also a difficulty that students experience in a real world fault test (and in the AAR of a fault test). There are many concurrent processes in an anesthesia machine, and it can be difficult for students to know where to look to find the faults. To resolve this problem in the collocated AAR, students can see a visualization of where they were looking or where the expert was looking during the fault test (although not at the same time). To generate this visualization, we tracked the head of each student and of our expert. The resulting look-at indicator (the highlighted spotlight in Figure 5-4) helps students to direct and focus their attention in the AAR.
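A look-at indicator of this kind can be derived from logged 6DOF head data by intersecting the head-gaze ray with the geometry of the machine. The sketch below uses a single plane as a stand-in for the machine's front face; the actual AAMVID tracking and rendering pipeline is not shown, and all numeric values are illustrative.

```python
# Geometric sketch of deriving a "look-at" point from 6DOF head data by
# intersecting the gaze ray with a plane standing in for the machine face.
import numpy as np

def look_at_point(head_pos, gaze_dir, plane_point, plane_normal):
    """Return the gaze ray's intersection with the plane, or None if the
    user is looking away from it or parallel to it."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = float(np.dot(plane_normal, gaze_dir))
    if abs(denom) < 1e-6:
        return None  # gaze is parallel to the plane
    t = float(np.dot(plane_normal, plane_point - head_pos)) / denom
    return head_pos + t * gaze_dir if t > 0 else None

# Illustrative frame: head 2 m in front of a machine face in the z=0 plane.
spot = look_at_point(np.array([0.0, 1.6, 2.0]),    # head position (m)
                     np.array([0.1, -0.2, -1.0]),  # head-gaze direction
                     np.array([0.0, 0.0, 0.0]),    # point on machine face
                     np.array([0.0, 0.0, 1.0]))    # face normal
print(spot)  # where to render the spotlight
```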
5.3.7 Viewing Modes

One important design decision was that AAMVID-S only allows students to control the playback of one previous experience at a time. We expected that control and visualization of multiple played-back experiences would complicate the visual feedback and confuse students. Instead, AAMVID-S splits its sources of data into three modes. Thus, using the aforementioned visualization and interaction techniques, there are three different viewing modes that visualize data from different sources. Each of these modes corresponds to specific sets of data that are being collocated with the real world.

5.3.7.1 User view mode

This mode visualizes the student's fault test collocated with the real machine. During this mode, the real machine is off (electrical and pneumatic power shut off) to minimize conflicting processes that are visualized in the abstract simulation. For example, it might be confusing to students if the ventilator was on (bellows cycling up and down) during their interaction visualization, but off in the real world. We expected that if students knew that the machine was off, they would treat it as a place holder during the review experience, serving only to put the review experience in context with the machine.

5.3.7.2 Expert view mode

This mode visualizes the expert's fault test collocated with the machine in the same way that User View Mode visualizes the user's fault test (the machine is also off during this mode, for the same reasons as during User View Mode). This type of interactive visualization makes the expertise of the domain expert (whose time is usually in short supply) readily available to essentially an unlimited number of users at any time. In essence, this abstractly visualized expert interaction makes expertise available on demand.
5.3.7.3 Expert tutorial mode

This mode directs student attention to an interaction with the overlaid interaction event boxes and look-at indicator of the expert, but the student must perform the interactions, because the abstract simulation visualization comes from the real-time tracking data of the anesthesia machine, which is turned on during this mode. This enables the student to (1) visually follow the expert's interactions, (2) physically mimic the interactions (Figure 5-1), and (3) visualize how these interactions affect the internal workings of the machine in real time. This promotes a more hands-on learning experience.

5.4 AAMVID-Student (AAMVID-S) Usability Study

A study was conducted to determine the usability of AAMVID-S and, more generally, to identify the advantages and disadvantages of mixed simulators for immersive visualization, with specific emphasis on user focus + context in spatially complex scenes. Moreover, from an application perspective, we wanted to determine if collocated AAR is a viable form of AAR.

In the study, 19 students enrolled in an Introductory Psychology course were first trained using the AAM. Then they were given three machine fault tests. After each test, they used AAMVID-S for AAR, which was performed without an expert present and with minimal assistance from the experimenter (i.e., the experimenter would answer interface-related questions but not anesthesia-related questions). Each participant was given a questionnaire before and after the AAR to gauge how collocated AAR affected (1) understanding of the machine faults and (2) their level of confidence in their answers.

The main purpose of this study was to evaluate our immersive visualization design decisions for focus + context in AAMVID-S. After students used AAMVID-S, we expected to observe the general increases in student confidence and understanding found in a typical AAR system, but our main aim was to obtain feedback from students about our mixed simulator-based
immersive visualization approach. In the future we plan to formally compare AAMVID-S to a video-based system in order to assess the specific benefits of mixed simulators for visualization of past experiences.

5.4.1 Study Procedure

For each participant, the study was conducted over a period of two days. The first day consisted mostly of anesthesia machine training. The second day included hands-on tests with the real machine. Participants performed an AAR for each of the hands-on tests. The second day also included several questionnaires about their opinions of the training and AAR modules and personal information (e.g., computer usage and experience, Grade Point Average (GPA)).

5.4.1.1 Day 1 (~90 minute session)

1. INTRODUCTION. Participants were provided a manual, which first gave them an introduction to the VAM. The manual was used in conjunction with an online interactive tutorial, which highlighted and explained each of the major VAM components and subsystems. The VAM simulation was used to direct the introduction because the VAM is an intrinsic component of the AAM, as its computational media.

2. COMPLETE 5 EXERCISES. Each participant completed the same 5 exercises by following the manual and interacting with the AAM. Each of the exercises focused on a specific anesthesia machine concept (i.e., a particular component or subsystem). These are the same five exercises from the study in Chapter 4.

5.4.1.2 Day 2 (~90 minute session)

For logistical reasons and to prevent participant fatigue and avoid testing superficial knowledge and short term retention, we attempted to have a time interval of 24 hours between the Day 1 and Day 2 sessions.

1. TRACKED 6DOF HAT CALIBRATION. To track participants' heads during the subsequent tests, participants wore a tracked 6DOF hat. The hat was tracked with the same optical IR-based system as the magic lens. For calibration, the student first wore the hat and faced the machine. The base orientation of the hat was recorded. Each student was then asked to look at four different components on the anesthesia machine, and the pitch of the tracked orientation was adjusted manually to match these components. This step was taken to improve the accuracy of head-gaze data and potentially better correlate head-gaze to actual eye-gaze.
2. THREE HANDS-ON ANESTHESIA MACHINE FAULT TESTS/AAR. For each participant, the tests were given in random order. For each test, the investigator first caused a problem with the machine (e.g., disabled a component). Participants were then told that there may or may not be a fault present. Participants then had to find and diagnose the problem. After each test, participants performed a collocated AAR session with AAMVID-S using each of the three viewing modes.

3. PERSONAL/OPINION QUESTIONNAIRES. Participants were asked several personal questions (e.g., computer experience) and what, in their opinion, were the most effective and least effective parts of the training and AAR modules.

5.4.2 Metrics

UNDERSTANDING. Right after each fault test, participants wrote down what they thought the fault was and how to correct it. Then they went through collocated AAR with AAMVID-S on their own. After the AAR, they again assessed what the fault was and how to correct it. To measure change in understanding, we measured the change in the quality of their answers. Each fault test was scored on a scale of 0 to 4, 4 being the best possible answer.

CONFIDENCE. Right after each fault test, participants wrote down how confident they were that their solution to the fault was correct. Then they went through collocated AAR with AAMVID-S. After the AAR, they again reported their confidence in their answers. Participants rated the confidence in their answers on a scale of 0 to 4, 4 being very confident.

SUBJECTIVE BENEFITS TO THE USER. In a questionnaire, we queried which parts of AAMVID-S (e.g., User View Mode, Expert Tutorial Mode, look-at indicator) were most helpful in directing attention. We also asked questions about usability and potential reuse for future AAR experiences. Opinions were given on a 5-point Likert scale.

5.4.3 Results

This section presents and discusses the results of the study. It is organized by the metrics used: understanding, confidence, and subjective benefits.

5.4.3.1 Discussion of understanding

As expected of an AAR system, AAMVID improved participant understanding of machine faults (Figure 5-5). Prior to the AAR, most participants misdiagnosed the fault or thought that there was no fault after completing each fault test. However, once participants used AAMVID to review the fault, they were able to correct the fault in the real machine and changed their original answers to the correct answers. This suggests that their understanding increased significantly, as
expected of an AAR system. This increase in understanding supports the notion that AAMVID is a viable AAR system.

However, scores were not perfect (4.0) even after students reviewed an expert's experience. We expect that understanding and confidence may not have reached full potential in part due to the interaction box visualization approach. For example, during a fault test where an N2O valve was closed, the expert interaction box told the participants to "Open the N2O valve." Some participants thought the problem was that the N2O valve was open. However, the valve was already closed, and the participant was supposed to open the valve to solve the fault correctly. This demonstrates a disadvantage of using a written description of interactions, as found on the interaction event boxes. It suggests that written directions (as found in many MR systems) are not always enough to elicit correct interaction. In this case, a video might have been more helpful in demonstrating the details of an interaction.

5.4.3.2 Discussion of confidence

As expected of an AAR system, AAMVID increased participant confidence in their answers to questions of how to correct the faults and what effect the faults had on the patient (Figure 5-6). Prior to the AAR, participants had relatively low confidence in their assessment of the faults. However, after participants used AAMVID for AAR, their confidence significantly increased. This increase in confidence and understanding supports the notion that AAMVID is a viable AAR system.

5.4.3.3 Discussion of subjective benefits

Most participants expressed that they would use the magic lens for machine fault study in the future because the magic lens was a useful tool in helping them to understand machine faults and to direct their attention during the AAR. Specifically, participants expressed that Expert Tutorial Mode was the most helpful AAMVID mode. Expert Tutorial Mode overlaid an expert's
interaction boxes and look-at indicator in situ with the real anesthesia machine and the abstract simulation. This mode allowed participants to observe and physically mimic the expert's collocated interactions. Participants found the expert tutorial's interaction boxes and look-at indicator easy to follow and noted how they preferred the mode's more hands-on experience. This suggests that AAMVID's collocated interaction boxes and look-at indicator are an effective way to focus a student's attention and direct them where in the training space to interact.

Some participants noted that they did not like the DVD-player interface (e.g., 2D buttons: play, pause) for the playback controls. They found this interface cumbersome and unintuitive to use with the magic lens display's pen. This suggests that there are ergonomics issues with integrating a hand-held 6DOF magic lens with 2D time manipulation. Participants typically held the tablet in the non-dominant hand and used the pen with the dominant hand. The weight of the tablet in the non-dominant hand coupled with the position of the buttons on the screen could have become cumbersome over time. A problem could also be that, unlike manipulation of virtual objects in space, there are very few interface metaphors for the manipulation of virtual time. Users might benefit from a different type of interface (e.g., a 3D interface).

5.5 Immersive Visualization of Large Data Sets with AAMVID for Educators

With AAMVID-S, educators do have the ability to review individual student interactions, but they are often more interested in visualizing aggregate data to assess class-wide performance. Educators may be interested in identifying trends (e.g., many students may approach the same problem incorrectly) and outliers (e.g., the few students who perform exceptionally well or poorly). To meet this need, AAMVID-E can combine data from multiple students and visualize this data via the magic lens. Currently, AAMVID-E displays aggregate head-gaze and interactions (i.e., turning knobs, pressing buttons).
In this section we present a novel mixed simulator-based immersive visualization approach that enables users to visualize and interact with large data sets (e.g., interaction data from an entire class) in the context of the real world. To assess the effectiveness of this approach, we conducted an informal study with three anesthesia education experts.

5.5.1 Gaze Maps

Psychology research has shown that the focus of a person's gaze is highly correlated with what they are thinking about [29]. A visualization of gaze can help educators better understand the main components that students focus on during a fault test, and allows them to adjust their education plans (e.g., lectures) accordingly. To enable gaze visualization, AAMVID-E generates a heat-mapped (i.e., the places where participants focused appear more "hot" in color) visualization of where students were looking during a fault test (Figure 5-7). That is, the gaze visualization is shown in context with the real machine. This in-context heat map approach may help the user to focus attention on the more relevant spatial distribution of gaze data.

5.5.2 Implementation of Gaze Maps

Our method of generating gaze maps is described in this section. Note, however, that there may be more efficient methods to generate the gaze maps. Our gaze mapping method utilizes an image-based approach, which requires a significant amount of preprocessing time to facilitate the rendering of gaze maps at interactive rates. This method requires a 3D model of the geometry that the gaze maps will overlay. In the case of AAMVID, we used a scale 3D model of the real machine (shown in Figure 5-7). This model must be registered to the real object (e.g., the anesthesia machine) that users were gazing at during the experience. We perform the following preprocessing for every data point in a student's log:

- Get the head position and orientation.
- Project the 4 rays of the viewing frustum (generated by the head tracking data) into the texture space of the 3D model.
- Additively blend a grayscale Gaussian into the current gaze map texture, which is initially transparent. The Gaussian is scaled and positioned based upon the quadrilateral formed by the ray intersection points.
- Map the additively blended textures to a heat scale: a 1D array of RGB values that visually appear "hotter" as the array index increases.

In our specific implementation, the gaze maps are alpha blended with the underlying textured machine geometry. For example, the machine shown in Figure 5-7 is a textured 3D model. However, gaze mapping could be extended to a see-through display. If the registered 3D machine model was not rendered, then the gaze maps could be blended with a real-time video stream instead. Although preprocessing time increases with the number of data points, the preprocessing step must only be performed once. After the gaze maps are generated, they are written to textures, which can be stored on a hard disk for future runs of the application. This texture-based approach ensures that the amount of geometry does not increase, since there is only need for the one 3D model. This allows the gaze maps to be rendered at an interactive rate (e.g., 30-60 fps, depending on hardware). A simplified sketch of the accumulation step is given below.
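The following is a simplified, CPU-only version of the accumulation described above: each gaze sample is splatted as a Gaussian into an accumulation texture, which is then mapped through a heat scale. It assumes each sample has already been projected to a single 2D texture coordinate, whereas the system above projects the full view frustum; the resolution, sigma, and color mapping are illustrative choices, not the dissertation's implementation.

```python
# Simplified gaze-map accumulation: Gaussian splats plus a heat mapping.
# Assumes gaze samples are already 2D texture coordinates in [0, 1].
import numpy as np

def gaze_map(samples_uv, size=256, sigma=8.0):
    acc = np.zeros((size, size), dtype=np.float32)
    ys, xs = np.mgrid[0:size, 0:size]
    for u, v in samples_uv:
        cx, cy = u * size, v * size
        # Additively blend a grayscale Gaussian centered on the sample.
        acc += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    acc /= max(float(acc.max()), 1e-9)  # normalize intensity to [0, 1]
    # Map intensity through a simple cold-to-hot (blue-to-red) scale.
    heat_rgb = np.stack([acc, np.zeros_like(acc), 1.0 - acc], axis=-1)
    return heat_rgb

heat_texture = gaze_map([(0.50, 0.50), (0.52, 0.48), (0.20, 0.80)])
```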
5.5.3 Markov Model of Class Interaction

Trends in student interaction are important to improving educators' pedagogical approaches. This type of information is useful in determining whether students unknowingly perform interactions that are potentially harmful to a patient (e.g., over-inflating the lungs with the oxygen flush valve). If the educator is able to isolate such a trend, then they can adjust their lesson plans accordingly. To meet this need, AAMVID-E aggregates and visualizes the interactions of an entire class of students in the context of the real machine.

5.5.3.1 Implementation

Each student's interaction event chain (explained in section 5.3.4) is integrated into a simple Markov model [16]. A Markov model can be represented as a directed graph in which each arc has a probability associated with it. For a given node, all of the arc weights stemming from that node add up to 1. When traversing this graph, as in a simulation, the arc weights represent the probability that a subsequent node will be visited. For example, a set of user logs contains a finite set of discrete interaction events. These events form the nodes of the directed graph. The sequence of events forms the arcs of the graph, and the frequency of these sequences determines the weights on the arcs. Based upon the sequences and frequencies of multiple users' events, we can generate a probability (e.g., the percentage of students that performed the sequence) that a specific sequence of events will occur. These probabilities are the basis for the resulting Markov model. Given such a model, educators can generate the probability that a student in the class will first increase the N2O and then decrease the N2O.

This data is visualized as a directed graph (Figure 5-8), which can be collocated with the anesthesia machine and visualized using the magic lens (Figure 5-9). In this case, mixed simulators offer a novel modality for interacting with the interaction model data. Specifically, instructors press buttons and turn knobs on the machine in an order of their choosing. That is, by interacting with the machine, the educators can traverse the directed graph of possible interactions. Then the model generates the probability that a student in the class would perform that sequence of actions.
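The sketch below shows one way such a model can be estimated and queried. Transition probabilities are counted from the students' logged event sequences, a sequence can be scored by the product of its transition probabilities, and the same table can drive the representative-student sampling described in the next subsection. The event names are illustrative, and this is not the dissertation's actual implementation.

```python
# Sketch of a first-order Markov model over logged interaction events.
# Event names and log contents are illustrative.
import random
from collections import Counter, defaultdict

class InteractionMarkovModel:
    def __init__(self, sequences):
        # transitions[a][b] = number of times event b followed event a
        self.transitions = defaultdict(Counter)
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                self.transitions[a][b] += 1

    def prob(self, a, b):
        total = sum(self.transitions[a].values())
        return self.transitions[a][b] / total if total else 0.0

    def sequence_prob(self, seq):
        """Probability of this exact event sequence, given its first event."""
        p = 1.0
        for a, b in zip(seq, seq[1:]):
            p *= self.prob(a, b)
        return p

    def simulate(self, start, steps):
        """Sample a 'representative student' trace (student simulation mode)."""
        event, trace = start, [start]
        for _ in range(steps):
            followers = self.transitions[event]
            if not followers:
                break
            event = random.choices(list(followers),
                                   weights=list(followers.values()))[0]
            trace.append(event)
        return trace

logs = [["increase_N2O", "decrease_N2O", "O2_flush"],
        ["increase_N2O", "O2_flush"]]
model = InteractionMarkovModel(logs)
print(model.sequence_prob(["increase_N2O", "decrease_N2O"]))  # 0.5
print(model.simulate("increase_N2O", steps=3))
```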
5.5.3.2 Student simulation mode

The Markov model can also be used to drive the simulation of a "representative student." If the educator turns on the student simulation mode, the system will autonomously update the abstract simulation with interaction events (e.g., turning a virtual knob) based upon the Markov model of class interaction. Interaction sequences that are more probable will occur more often. This allows the educators to observe common interactions in class performance.

5.5.6 Data Filtering

For educators to more effectively identify class trends, it is helpful to be able to filter the data based on certain parameters, such as class performance or standardized test results. To meet this need, AAMVID-E allows educators to interactively filter the data based on parameters that the educator defines before runtime. For example, if the educator wanted to visualize only the gaze data of students with low spatial cognition, they can enter spatial ability test values for each student in the aggregate log files. At runtime, the expert can interactively manipulate sliders to select the range of spatial ability to visualize (Figure 5-7). AAMVID-E's filtering allows educators to investigate how parameters, such as spatial cognition or standardized test scores, affect gaze and interaction. This interactive filtering allows educators to more effectively identify trends and outliers in the class.
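A sketch of the filtering step: each log record carries per-student attributes entered by the educator (here a hypothetical Arrow Span score), and the slider endpoints select the range of records passed on to the gaze-map and interaction visualizations. The field names are illustrative, not AAMVID-E's actual data format.

```python
# Sketch of slider-driven log filtering; field names are illustrative.
def filter_logs(records, attribute, low, high):
    """Keep only records whose attribute lies in [low, high]."""
    return [r for r in records
            if low <= r.get(attribute, float("nan")) <= high]

records = [{"student": 1, "arrow_span": 55, "gaze_log": "..."},
           {"student": 2, "arrow_span": 31, "gaze_log": "..."}]
low_spatial = filter_logs(records, "arrow_span", 0, 40)  # student 2 only
```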
5.6 Expert Evaluation of AAMVID for Educators

The goal of the following evaluation is to investigate the usage of mixed simulators for directing focus and context in immersive visualization of large data sets. From an application perspective, we wanted to investigate the potential benefits of collocated AAR for educators. To meet these goals, three experts in anesthesia education informally assessed AAMVID-E. One expert was Dr. Samsun Lampotang, an anesthesia education expert, inventor of the VAM, and coauthor of this paper. The second expert was David Lizdas, an anesthesia machine expert and anesthesia simulation programming expert (he programmed the VAM). The final expert was Nikolaus Gravenstein, M.D., Professor and Chair of the Anesthesiology Department at the University of Florida. These three experts performed an evaluation by using AAMVID-E to visualize and interact with the aggregate training data obtained from the 19 AAMVID-S study participants.

5.6.1 Evaluation Procedure

Each expert evaluated the system individually. Each expert was first shown AAMVID-S. Then they were shown all the features of AAMVID-E and were asked to interact with it. Afterwards, each expert was interviewed and prompted to elaborate on several questions to determine usability, usefulness, and possible future directions of AAMVID. The questions are as follows:

Question 1: What kinds of class trends are hard to identify?
Question 2: Do you think AAMVID-E could help identify trends in class performance that you could not identify with your current assessment tools?
Question 3: Would you prefer the MR-based version, a desktop version, or both for future reviews?
Question 4: What would you like to see done differently in the future (e.g., visualizations, interfaces, filters)?

5.6.2 Discussion

Question 1: Class trends? The experts' answers to this question suggest that educators need to be able to identify student misconceptions. Further, they need to understand why students have these misconceptions. To understand this, they are very interested in probing the thought processes of their students that cause the development of incorrect procedural skills (such as performing a sequence of actions in the wrong order). This understanding would enable educators to change their teaching methods and address the misconceptions in training, thereby improving training overall.

Question 2: Can AAMVID help identify trends that you could not identify before with current assessment tools? All of the experts answered yes to this question. They explained that
the gaze data visualization coupled with the interaction sequence visualizations gives them a better understanding of the thought processes of students. Their impressions are summed up in the following quote from Dr. Lampotang: "[AAMVID-E] really gives us a new tool that we haven't had before, which is really getting a bit closer to seeing what [the students] are thinking."

Question 3: MR vs. desktop? We asked the experts if they would prefer the MR-based version or a desktop version. The experts noted that they would prefer both the desktop and MR versions. For convenience (and because the lens can be cumbersome at times), they would like to use a desktop version for "personal review" to visualize the data in their office or on their own computer. They would use the MR version for "external review" to (1) perform an instructor-assisted AAR with a student (using AAMVID-S), (2) visualize the data for non-educators (such as anesthesia machine engineers), and (3) physically interact with the machine to manipulate the data. This highlights the advantages of mixed simulators for immersive visualization and interaction. The desktop version of AAMVID is a 3D graphics application that is controlled with a keyboard and mouse. The gaze maps and interaction graphs are still visible and interactive, but the visualization and interaction take place on a desktop computer, rather than on the magic lens in the context of the anesthesia machine.

Question 4: Future directions? The experts conceived other general uses for AAMVID-E in various domains. For example, they would like to use AAMVID-E in the future to visualize the "economy of motion." That is, experts (medical and non-medical) are highly efficient in their interactions, whereas novices might fumble or take more time between interactions. Based upon this economy of motion, educators would like to be able to track and visualize a novice's progress as more expertise is gained. AAMVID-E currently only visualizes frequencies and
sequences of interaction and does not take into account this economy of motion. In the future, we hope to incorporate this aspect.

Dr. Gravenstein was specifically interested in applying collocated AAR to enhance the assessment of other training applications outside of anesthesia, such as surgery. His comments highlight the generalizability of the AAMVID-E immersive visualization approach and collocated AAR: "You are presenting us with a new way to look at this kind of stuff in our weird environment. And the application isn't unique to our environment; the application is really in any environment where there are degrees of ability, especially where there are lots of steps and complexities that you have to sort out."

5.7 Chapter Summary

This chapter investigated the use of mixed simulators for immersive visualization. Specifically, we were interested in how the additional tangible interaction and abstract visualization shown in context with the real world could present novel and effective focus + context visualizations. We investigated this by evaluating mixed simulators in an application of immersive visualization: collocated AAR. Mixed simulator-based collocated AAR augments the traditional AAR process by (1) allowing a user-controlled egocentric viewpoint, (2) overlaying virtual information that enhances learning (i.e., abstract simulation and automatic annotation of interaction events), and (3) collocating multiple training experiences in situ with the real training area (i.e., collocating one's own previous experience, an expert's previous experience, and the current real-time experience) through immersive visualization in the context of the real world.

To evaluate the system's approach to collocated AAR, both students and educators evaluated AAMVID. Results of our studies suggest that (1) mixed simulators enable a viable type of AAR and can effectively direct a student's attention and interaction, and (2) mixed simulator-based immersive visualization and
interaction offer educators novel assessment tools that, according to the educators, may help them to better understand the elusive thought processes of students.

5.8 Conclusions

One of the grand challenges of the area of visualization is to perceptibly render large amounts of spatiotemporal data, such as the gaze and interaction data described in this chapter. Large amounts of raw data (e.g., a massive, multi-dimensional table of numbers) are difficult or impossible for humans to mentally synthesize. The purpose of visualization is to aid in the perception and understanding of trends in this data, enabling users to perceive focus and context within the data. The system presented here offers an approach that addresses these challenges. As suggested by our user studies, our novel mixed simulator-based collocated AAR approach offers additional interactive immersive visualizations that focus user attention while enabling real world context for the abstract visualization. Ultimately, immersive visualization with mixed simulators may aid in human understanding of large amounts of data.

Figure 5-1. Student view in the Augmented Anesthesia Machine Visualization and Interactive Debriefing system (AAMVID). A) A student view from a magic lens. B) A student mimics the collocated expert interaction.
Figure 5-2. Real-world view of A) a user touching an incompetent inspiratory valve and B) the corresponding AAMVID view of an incompetent inspiratory valve during AAR.

Figure 5-3. Past interaction event boxes are collocated with the real controls and describe past interactions. The boxes are connected with lines, denoting a chain of interaction events.
Figure 5-4. The student can see what an expert was looking at, denoted by the large red spotlight.

Figure 5-5. Understanding before and after collocated after action review (AAR) (p < .001). Standard error bars are shown.

Figure 5-6. Confidence before and after collocated AAR (p < .001). Standard error bars are shown.
Figure 5-7. A gaze map collocated with the machine. In this case, many students were looking at the flow meters during the fault test. This data can be interactively filtered using the slider controls in the top left.

Figure 5-8. A heat-mapped (on frequency of interaction), directed graph of aggregate student interaction.
123 Figure 5 9 The interaction graph is collocated with the machine. Educators can test the probability of interaction sequences, highlighted by the ic ons at the top.


CHAPTER 6
THEORETICAL AND SOFTWARE FRAMEWORKS FOR MIXED SIMULATORS

This chapter addresses the software engineering challenge of generalizing application-specific software into a generic framework that supports efficient design and implementation of new applications. Specifically, we present novel theoretical and software frameworks to support the design and implementation of mixed simulators. These frameworks can be used in conjunction to enable an iterative design process that may make the engineering of mixed simulators more efficient. First, the theoretical framework offers engineers a classification system, based on the principles of scaffolded learning with mixed simulators, that aids the design process and offers engineering guidelines for new domain-specific mixed simulators. Then, once engineers have determined the design of their mixed simulator, the software framework helps to make the implementation process more efficient. The software framework and the associated authoring tool build upon previous work from the area of multi-modeling [54] in using semantic networks to connect dynamic models. Specifically, our framework offers a semantic network code infrastructure that can define and organize the semantic links between the two fundamental parts of a mixed simulator: (1) the abstract model components and (2) the corresponding real-world components. Using the example of creating a mixed simulator for a CRT monitor [70], this chapter demonstrates the novel mixed simulator design and implementation process. Note that the theoretical framework was published in Computers & Graphics in 2009 [59].

6.1 Theoretical Framework

This section presents a theoretical framework for classifying mixed simulators, with the goal of aiding software engineers in designing them. To support educational application design specifically, we based this framework on the principles of scaffolding, which are reviewed in Chapter 2.


The central principle of scaffolding is decreasing directed instruction over time to enable students to increase their ability to learn independently. The results of our study in Chapter 4 suggest that different simulation types offer the user different levels of scaffolding. Therefore, when designing a technology-mediated scaffolding system, such as a mixed simulator or an abstract simulator, it is important to consider the level of scaffolding desired and the types of concepts to be scaffolded. To address this need, we present a theoretical framework called the Scaffolding-Space Continuum, which extends the Virtuality Continuum (see Chapter 2 for additional details) and offers a classification system specifically for technology-mediated scaffolded learning environments, such as mixed simulators.

6.1.2 The Scaffolding-Space Continuum

We propose a Scaffolding-Space Continuum to classify technology-mediated scaffolded learning environments, such as mixed simulators. The Scaffolding-Space Continuum consists of three continuums of classification (Figure 6-1): Virtuality, spanning from real to virtual; Information, spanning from concrete to abstract; and Interaction, also spanning from concrete to abstract. As seen in Figure 6-1, each of the scaffolding systems (e.g., the AAM-MR from Chapters 3 and 4) presented in this paper can be classified by these continuums in this theoretical framework.

6.1.2.1 Virtuality continuum

The Virtuality Continuum defines a taxonomy of MR systems and displays [48]. For more details, see Chapter 2. In short, the Virtuality Continuum ranges from real to virtual, loosely referring to the proportion of real and virtual objects in an environment. For example, if there are more real objects than virtual objects in the environment, the corresponding display may be considered an Augmented Reality display. In contrast, if there are more virtual objects than real objects, the display would be classified as Augmented Virtuality.


The Scaffolding-Space Continuum includes the virtuality continuum to help engineers design a system with the appropriate display (e.g., monitor, see-through magic lens) for the scaffolding needs of the application.

6.1.2.2 Information continuum

The information continuum varies from concrete information to abstract information. We reiterate the definition of abstract conceptualization from Chapter 2: "thinking about, analyzing, or systematically planning, rather than using sensation as a guide" [35]. Representations that teach these types of skills (e.g., understanding the internal dynamics of a car engine) would be considered abstract. In contrast, concrete refers to "tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality" [35]. Representations that teach these types of skills (e.g., driving a car) would be considered concrete. A system is located at a specific place along the continuum based on the proportion of abstract and concrete information presented by the environment. For example, we might consider the AAM-MR representation to be slightly more concrete than abstract (Figure 6-1). When designing educational applications, the scaffolding-space continuum classification system can help engineers to define the needed level of abstraction for the information representation.

6.1.2.3 Interaction continuum

The interaction continuum varies from concrete to abstract. This continuum refers to the abstractness of the interfaces. Factors such as the interface's generality (e.g., a mouse is a very general interface) and its dimensionality (e.g., 3D interaction is usually considered more concrete than 2D) can aid in classifying an interface's level of abstractness or concreteness. For example, the VAM's mouse interface is considered very abstract, whereas the AAM's anesthesia machine is considered very concrete (Figure 6-1). When designing a mixed simulator to meet the specific scaffolding needs of an application, it is important for engineers to consider the type of interface needed.


The scaffolding-space continuum can help engineers to consider this and to compare the classifications of potential interfaces, as in Figure 6-1.

6.1.3 Differences Between Continuums

It may not be obvious that the aforementioned continuums are in fact different continuums. Based on the examples given throughout this dissertation, one might conclude that concrete is always congruent to real. To understand why this is not the case, consider driving in a virtual car simulation. All objects represented in the simulation are virtual, but the information presented is concrete. Similarly, one could use a mouse (an abstract interface) to interact with either virtual objects or real objects; the latter is rarely encountered but definitely possible. There are many more examples, but the main point is that the three continuums differ in the types of systems that can be represented. However, in many practical implementations, systems are correlated across continuums (e.g., information and virtuality: abstract representations may be easier to implement in virtual reality).

6.1.4 Scaffolding by Movement Along Continuums

One of the main principles of scaffolding is the importance of gradually fading instructional support as the learner increases in competence. In the Scaffolding-Space Continuum, this fading can be represented as gradual movement along one or more of the continuums. For example, a scaffolding solution that fades abstract concepts might utilize multiple systems classified along the information continuum. Moreover, by classifying two systems with the Scaffolding-Space Continuum, it may become apparent whether there is a lack of scaffolds between the two systems. For example, if the VAM and the machine are classified with the Scaffolding-Space Continuum, it becomes obvious that there are large instructional gaps along all three continuums. That is, using the continuums as a visual representation of a scaffolding solution, educators may be able to use the Scaffolding-Space Continuum as a guide to determine intermediary systems (e.g., the AAM) that could provide the needed instructional supports to fill the gaps.


Then, one can classify multiple systems that fade interaction, virtuality, or information, depending on the class of instructional support that needs to be faded. In general, the purpose of the scaffolding-space continuum is to enable engineers to more effectively design a mixed simulator that is classified to accommodate the needed levels of scaffolding.

6.1.5 Example Design Process: an Augmented CRT Monitor

We demonstrate the usage of the scaffolding-space continuum through the example of designing a mixed simulator to augment a CRT monitor. In general, a mixed simulator consists of abstract simulator components and physical simulator components. As seen in Figure 6-2, the CRT monitor that we want to augment has both abstract and physical components. There are two main physical components: the monitor screen and the monitor casing that houses the internals. There are also two main abstract components: an abstract simulation of the electron gun inside the monitor and an abstract simulation of the monitor screen's raster scan behavior.

To design an augmented CRT monitor, a designer could use the scaffolding-space continuum to plan out the design. First, the designer takes each abstract and physical component and estimates its mapping on each of the continuums that make up the scaffolding-space continuum (Figure 6-3). For example, the CRT screen would be considered concrete on the information continuum, concrete on the interaction continuum, and real on the virtuality continuum. Conversely, the electron gun simulator would be considered abstract on the information continuum, abstract on the interaction continuum (i.e., likely mouse-based interaction), and virtual on the virtuality continuum.

After mapping each of the components to the various continuums, a designer may better understand the scaffolding needs of the simulator. Observing where the components were mapped, it is obvious that there is a large scaffolding gap between the abstract simulator components and the physical simulator components. Depending on the scaffolding goals of the application, the designer can then plan out where a proposed mixed simulator should be mapped on the continuums to support those goals. For example, the designer might converse with educational psychologists who decide that additional scaffolding would be most helpful halfway between each of the abstract and physical simulation components. The designer could then use the continuums as a guide to design a representation that enables some real interaction with the monitor (e.g., manually changing the refresh rate by pressing the buttons on the monitor) and uses a virtual overlay facilitated with a mixed reality (MR) display, such as a magic lens.
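To make this mapping tangible in code, one can treat each classification as a point in a three-axis space. The sketch below is a minimal illustration of this idea; the struct, the normalized 0-1 axis scales, and the numeric placements are our own assumptions for illustration, not part of the published framework.

    #include <iostream>
    #include <string>
    #include <vector>

    // A sketch of a Scaffolding-Space Continuum classification. Each axis is
    // normalized to [0, 1]: virtuality (0 = real, 1 = virtual), information
    // (0 = concrete, 1 = abstract), interaction (0 = concrete, 1 = abstract).
    struct ContinuumPosition {
        std::string component;
        float virtuality;
        float information;
        float interaction;
    };

    int main() {
        // Rough placements of the four CRT components from Figure 6-3.
        std::vector<ContinuumPosition> components = {
            {"CRT monitor case",       0.0f, 0.0f, 0.0f},
            {"CRT screen",             0.0f, 0.0f, 0.0f},
            {"electron gun simulator", 1.0f, 1.0f, 1.0f},
            {"raster scan simulator",  1.0f, 1.0f, 1.0f},
        };
        // The proposed augmented CRT monitor targets the midpoint of the
        // gap between the physical and abstract components on each axis.
        ContinuumPosition proposed{"augmented CRT monitor", 0.5f, 0.5f, 0.5f};
        std::cout << proposed.component << ": virtuality " << proposed.virtuality
                  << ", information " << proposed.information
                  << ", interaction " << proposed.interaction << "\n";
        return 0;
    }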


6.2 Software Framework

Once the system has been designed using the scaffolding-space continuum, engineers can move on to implementation. To support this process, the software framework presented in this section offers engineers a semantic network-based code infrastructure that aids the implementation of the designs. The software framework can ultimately be integrated with other types of frameworks (e.g., rendering engines, simulation libraries) for delivery of applications. In fact, all the mixed simulators presented in this dissertation were built with the aid of open source frameworks such as OpenGL [77] for computer graphics, OpenCV [8] for computer vision, and VRPN [23] for networking. However, until now, there was no framework for the combination of abstract and physical simulations. To address this deficiency and make the implementation process more efficient, this section presents a generic software framework based on semantic networking.


In previous sections, the presented mixed simulators were each engineered with a different, specialized code base. For example, the AAM-MR contains code that synchronizes and registers the abstract flowmeters with the physical flowmeters and the corresponding 3D models. This code is specific to the AAM and cannot easily be reused in other mixed simulators. It would be more efficient for engineers if mixed simulators could be created from a reusable, generalized code base. Engineers could then use this code base to connect the various models used in the abstract, physical, and mixed simulations (i.e., model generation collocation).

6.2.1 Generalizing Mixed Simulators

To generalize mixed simulators, their fundamental features must be identified. The fundamental software features that all mixed simulators share are the semantic links (or lack thereof) between the abstract simulation components and the corresponding physical simulation components. For example, the VAM flowmeters are an abstraction of the physical flowmeters. The code for this is specific to the AAM, but all mixed simulators share the need for this type of link. The programmer (i.e., modeler) must implement these types of semantic links specifically for each mixed simulator. The general framework presented here is engineered to support programmers in designing and implementing the numerous semantic links (i.e., a semantic network) that describe a mixed simulator. Building upon the previous work of Park and Fishwick [53, 54], we designed a software framework and an authoring tool that generate a semantic network to represent the semantic links between the abstract, physical, and geometric models. The functionality of the framework and authoring tool is demonstrated using the simple example of a mixed simulator for a CRT monitor.


6.2.2 Semantic Network-Based Architecture

A semantic network can be thought of as a directed or undirected graph in which the vertices are concepts (e.g., abstract, concrete) and the edges are descriptions of semantic relationships (e.g., x is a type of y, x is an abstraction of y) [16]. In the case of a mixed simulator such as the AAM, the VAM components and the physical components would be vertices. The semantic relationships (e.g., "is an abstraction of") between the VAM components and the physical components would be represented by the edges of a directed graph.

The presented framework is implemented in C++, although the overall semantic networking concept could be implemented in most programming languages. Specifically, we used the Standard Template Library (STL) to support the creation of semantic networks in program code. In the framework, the network is represented as a tree map. For example, in C++, the network is defined as:

    std::map<int, std::map<std::string, std::vector<int>>> semantic_network;

The first integer is the index of a 3D model, which points to a map with a string index that represents the semantic relationship (e.g., "is an abstraction of"). It is important to note that, aside from abstraction, any other semantic relationship can be represented here, such as "is a part of" or "is a type of". The second map then points to a dynamic array (i.e., a vector in STL) that contains integer indexes of other 3D models. A vector is used here since a single object could be an abstraction of multiple other objects. At runtime, the map-based semantic network can be accessed and modified in real time.
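As a minimal illustration of this structure (the model indexes and the exact relationship string below are hypothetical, not taken from the AAM code base), a link can be added and queried as follows:

    #include <map>
    #include <string>
    #include <vector>

    std::map<int, std::map<std::string, std::vector<int>>> semantic_network;

    int main() {
        // Hypothetical model indexes: 0 = crt_inside.3ds, 1 = crtmonitor.3ds.
        const int abstract_internals = 0;
        const int scale_monitor = 1;

        // Record that model 0 "is an abstraction of" model 1. The vector
        // allows one object to abstract several others.
        semantic_network[abstract_internals]["is an abstraction of"]
            .push_back(scale_monitor);

        // At runtime the network can be queried and modified, e.g., to find
        // every model that model 0 abstracts:
        const std::vector<int>& targets =
            semantic_network[abstract_internals]["is an abstraction of"];
        return targets.empty() ? 1 : 0;
    }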


For exporting to disk and importing into other programming environments, the framework uses eXtensible Markup Language (XML) to represent the semantic network [39]. For example,

    <abstraction object1="crt_inside.3ds" object2="crtmonitor.3ds"/>

would read "crt_inside.3ds is an abstraction of crtmonitor.3ds", where the .3ds files refer to specific 3D models on disk. The start-tag (e.g., abstraction) represents the semantic relationship. The attribute names (e.g., object1) represent the models that are being semantically linked. Note that the direction of the link is from object1 to object2.

6.2.3 Authoring Tool

An authoring tool based on the framework was implemented to aid programmers in creating the semantic networks. This tool offers interactive creation of semantic networks and 3D visual output of the network structure. To demonstrate the functionality of the tool, the creation of an example mixed simulator is described. Specifically, a CRT monitor model is combined with an abstract CRT monitor model to create the Augmented CRT Monitor mixed simulator (Figure 6-4). The Augmented CRT Monitor mixed simulator consists of the four main components shown in Figure 6-5. To combine these scale and abstract 3D models, the user interacts with a mouse to connect the models. The user clicks on the first desired object, then clicks on a semantic relationship button, and then clicks the second object; a semantic link is then created between those objects. For example, the user first clicks on the abstract 3D model of the CRT monitor internals, then clicks the "is abstraction of" button, and finally clicks the CRT monitor model. This creates a semantic link between the abstract CRT monitor internals model and the scale CRT monitor 3D model. The authoring tool then visualizes this link by rendering an edge (i.e., a red 3D line) between the two models. The authoring tool offers the "is abstraction of" and "is part of" buttons for convenience. The user can also create and assign custom semantic relationships by typing a word into the provided edit box and then pressing the "is ______ of" button. Once the user is finished authoring, the network can be written to an XML file using the Save Network button.
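The final resulting network from the CRT monitor example contains the two abstraction links described in Figure 6-5. A sketch of the exported XML follows; crt_inside.3ds and crtmonitor.3ds are the file names used earlier, while the file names for the screen and raster scan models are illustrative assumptions:

    <abstraction object1="crt_inside.3ds" object2="crtmonitor.3ds"/>
    <abstraction object1="raster_scan.3ds" object2="crt_screen.3ds"/>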


6.2.4 Integration with Renderers and Simulators

Once the semantic network has been defined and written to XML, it can be imported into other visualization and rendering environments. For example, the XML can be easily integrated into XML-based scene graph representations. Specific rendering properties can be defined for the semantics. In the CRT example, the abstract objects can be associated with a rendering algorithm that registers the abstract objects to the scale 3D models, which can then be registered to the corresponding real objects for an augmented view. This type of XML-based architecture gives the user the freedom to apply any rendering algorithm for each type of semantic relationship. For example, this enables the modeler to define focus + context rendering properties for enhanced visualization effectiveness, as in [47]. This semantic network approach makes organizing and reusing code more efficient. Using the semantic network, users can easily swap out the models and associated simulations while preserving existing code. Current mixed simulator code can then be more easily adapted to create new mixed simulators. The framework ultimately implements the concept of model generation collocation. That is, engineers can generate a single simulation model that can be easily linked to new visual representations and 3D models using the semantic network approach.
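As one illustration of this design, an importer might keep a registry that maps each relationship start-tag to a rendering routine. The sketch below is our own minimal interpretation; the registry, function names, and signatures are assumptions, not the framework's actual API:

    #include <functional>
    #include <map>
    #include <string>

    // Hypothetical hook: one render routine per semantic relationship,
    // invoked with the indexes of the two linked models.
    using RenderRoutine = std::function<void(int object1, int object2)>;

    std::map<std::string, RenderRoutine> render_registry;

    void renderAbstractionOverlay(int abstractModel, int scaleModel) {
        // Register the abstract model's pose to the scale model's pose,
        // then draw it as an overlay (details omitted in this sketch).
    }

    int main() {
        // Associate the "abstraction" relationship with an overlay renderer.
        render_registry["abstraction"] = renderAbstractionOverlay;

        // When the importer encounters <abstraction object1=... object2=.../>,
        // it looks up and invokes the routine registered for that start-tag.
        render_registry["abstraction"](0, 1);
        return 0;
    }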


6.3 Chapter Summary

This chapter proposed a general iterative process for the design and implementation of mixed simulators. First, designers use the theoretical framework to help them assess the scaffolding needs of an application, such as the CRT monitor simulator presented here. They may then use this information as a guide for the implementation, defining the specific displays, information representations, and interfaces that meet the scaffolding goals of the application. Based on the proposed design, software engineers can use the presented software framework and authoring tool to efficiently implement the mixed simulator. The software framework aids the application programmer in creating the underlying semantic network that exists in all mixed simulators. It enables engineers to easily define and author semantic links, such as abstraction. Ultimately, the framework represents the resulting semantic network in XML for ease of translation and code reuse in other programming environments.

6.4 Conclusions

The theoretical and software frameworks presented in this chapter further generalize the mixed simulator approach. The frameworks extend this approach to a generic design process and a code infrastructure for more efficient mixed simulator application development. Moreover, these frameworks expand upon previous work in the area of multi-modeling by enabling the semantic linking of abstract simulation models to geometric models and, ultimately, to the corresponding real objects. Because the design and implementation of mixed simulators can be generalized, we expect that effective mixed simulators can be efficiently created for applications outside the anesthesia training domain.

Figure 6-1. The three continuums that make up the Scaffolding-Space Continuum. For examples, the continuums have been labeled with the systems presented in Chapters 3 and 4.


Figure 6-2. The components of the augmented cathode ray tube (CRT) monitor. A) A scale 3D model of the CRT monitor, B) a scale 3D model of the CRT monitor screen, C) an abstract 3D model of the internals of the monitor, and D) an abstract 3D model of the screen's raster scan function.

Figure 6-3. The existing CRT monitor components mapped to the scaffolding-space continuum. A) CRT monitor case, B) CRT screen, C) electron gun simulator, D) raster scan simulator.


Figure 6-4. The augmented CRT monitor that was created using the software framework and authoring tool.

Figure 6-5. Semantic links are visualized by red 3D lines: (C) an abstract 3D model of the internals of the monitor is an abstraction of (A) a scale 3D model of the CRT monitor, and (D) the 3D model of the screen's raster scan is an abstraction of (B) a scale 3D model of the CRT monitor screen.


CHAPTER 7
SUMMARY AND FUTURE DIRECTIONS

7.1 Summary

Mixed simulators effectively blend abstract models with physical objects through the combination of virtual and real environments in space and time. This combination compensates for low spatial cognition and augments the user's ability to apply abstract knowledge in real-world scenarios. These cognitive benefits of mixed simulators can enhance immersive information visualization and may effectively offer users real-world context for visual perception of complex spatiotemporal data. In general, mixed simulators combine abstract and physical simulation in a novel interaction and visualization approach that affords users significant cognitive benefits in simulation-based training.

7.2 Future Directions

In the presented mixed simulator approach, we augmented the user's perception of the real world with abstract visualization. As demonstrated by our studies [59] and others [15], abstract visualization has many cognitive benefits, such as providing simplified, easier-to-understand visuals of complex phenomena. However, aside from visual feedback, abstraction has rarely been used to represent other modalities of sensory feedback. Additional abstract feedback to other senses may have cognitive benefits that have not yet been determined. Thus, we plan to engineer novel interfaces and displays that provide abstract feedback to multiple senses (e.g., visual, audio, haptic) and effectively contextualize this feedback with the real world. These multi-modal mixed simulators would pose new challenges for registration research and potentially offer increased cognitive benefits to the user. Specifically, the area of immersive information visualization could benefit from multi-modal mixed simulators. In general, these types of multi-modal mixed simulators could afford users increased immersion and a better understanding of the thought processes and feelings of large groups of people.


For example, in the application of after action review, biometric sensors such as body temperature and heart rate could be integrated into the training system. Then, during the debriefing, the reviewer could physically experience how users felt through haptic and audio interfaces that abstractly represent this aggregate biometric data. This multi-modal approach may effectively improve perceptual augmentation in after action reviews and immersive visualization. Moreover, multi-modal interaction can also include speech interaction, which is yet another unexplored modality in mixed simulators. In the future, we plan to integrate a virtual human (e.g., a simulated patient or doctor for medical training) into the mixed simulator; the virtual human will react to natural speech input through visualization, audio, and haptic feedback. Ultimately, this work could significantly impact the field of virtual human research and the known benefits of human-virtual human interaction.

One of the goals of the presented software framework is to make mixed simulator application development more efficient. In the future, we plan to continue this work by implementing a generic mixed simulator for applied mixed simulation development. For example, a user could interact with a tracked anesthesia machine and, based upon the tracked interaction input and output, the mixed simulator could generate a prototypical abstract model and register this model to the real object interactively. This future work could significantly improve the mixed simulator implementation process as well as offer a novel dynamic modeling approach.


REFERENCES

[1] G. Allen and R. Smith, "After action review in military training simulations," Proc. Winter Simulation Conference (WSC '94), pp. 845-849, 1994.

[2] R.T. Azuma, "A Survey of Augmented Reality," Presence: Teleoperators and Virtual Environments, vol. 6, no. 4, pp. 355-385, 1997.

[3] J. Banks and J.S. Carson, Discrete-event System Simulation, Prentice Hall, 2001.

[4] M. Barnes, "Virtual Reality and Simulation," Proc. Winter Simulation Conference (WSC '96), pp. 101-110, 1996.

[5] E.A. Bier, M.C. Stone, K. Pier, W. Buxton, and T.D. DeRose, "Toolglass and Magic Lenses: the See-through Interface," Proc. Computer Graphics and Interactive Techniques (CGIT '93), pp. 73-80, 1993.

[6] M. Billinghurst and S. Weghorst, "The Use of Sketch Maps to Measure Cognitive Maps of Virtual Environments," Proc. Virtual Reality Annual International Symposium (VRAIS '95), pp. 40-47, 1995.

[7] D. Bowman, A. Datey, Y. Ryu, U. Farooq, and O. Vasnaik, "Empirical Comparison of Human Behavior and Performance with Different Display Devices for Virtual Environments," Proc. Human Factors and Ergonomics Society Annual Meeting, Virtual Environments, pp. 2134-2138, 2002.

[8] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal, vol. 25, no. 11, pp. 120-126, 2000.

[9] F.P. Brooks, "What's Real about Virtual Reality?," IEEE Computer Graphics and Applications, vol. 19, no. 6, pp. 16-27, 1999.

[10] F.E. Cellier, Continuous System Modeling, Springer, 1991.

[11] P.T. Chua, R. Crivella, B. Daly, N. Hu, R. Schaaf, D. Ventura, T. Camill, J. Hodgins, and R. Pausch, "Training for Physical Tasks in Virtual Environments: Tai Chi," Proc. IEEE Virtual Reality (VR '03), pp. 87-94, 2003.

[12] J.B. Cooper, R.S. Newbower, C.D. Long, and B. McPeek, "Preventable Anesthesia Mishaps: a Study of Human Factors," Quality and Safety in Health Care, vol. 11, no. 3, pp. 277-282, 2002.

[13] Department of the Army, Washington D.C., Training Circular 25-20: A Leader's Guide to After Action Reviews, September 1993.


[14] I. Fischler, S. Foti, and S. Lampotang, "Simulation and the Cognitive Science of Learning: Assessing the Virtual Anesthesia Machine (VAM)," Proc. Partnership in Global Learning Conf. Consolidating eLearning Experiences, 2005.

[15] I.S. Fischler, C.E. Kaschub, D.E. Lizdas, and S. Lampotang, "Understanding of Anesthesia Machine Function Is Enhanced With a Transparent Reality Simulation," Simulation in Healthcare, vol. 3, no. 1, pp. 26-32, 2008.

[16] P.A. Fishwick, Simulation Model Design and Execution: Building Digital Worlds, Prentice Hall, 1995.

[17] P.A. Fishwick, "Toward an Integrative Multimodeling Interface: A Human-Computer Interface Approach to Interrelating Model Structures," Simulation, vol. 80, no. 9, p. 421, 2004.

[18] C. Furmanski, R. Azuma, and M. Daily, "Augmented-reality Visualizations Guided by Cognition: Perceptual Heuristics for Combining Visible and Obscured Information," Proc. Int'l. Symp. Mixed and Augmented Reality (ISMAR '02), pp. 215-320, 2002.

[19] R.L. Goldstone and J.Y. Son, "The transfer of scientific principles using concrete and idealized simulations," The Journal of the Learning Sciences, vol. 14, no. 1, pp. 69-110, 2005.

[20] H. Grant and C.K. Lai, "Simulation Modeling with Artificial Reality Technology (SMART): an Integration of Virtual Reality and Simulation Modeling," Proc. Winter Simulation Conference (WSC '98), 1998.

[21] M. Hegarty, D.R. Montello, A.E. Richardson, T. Ishikawa, and K. Lovelace, "Spatial Abilities at Different Scales: Individual Differences in Aptitude Test Performance and Spatial Layout Learning," Intelligence, vol. 34, no. 2, pp. 151-176, 2006.

[22] J. Hopkins and P.A. Fishwick, "Synthetic Human Agents for Modeling and Simulation," Proceedings of the IEEE, special issue on Agent-Based Modeling and Simulation: Exploiting the Metaphor, vol. 89, no. 2, pp. 131-147, 2001.

[23] T.C. Hudson, A. Seeger, H. Weber, J. Juliano, and A.T. Helser, "VRPN: a device independent, network transparent VR peripheral system," Proc. Symp. Virtual Reality Software and Technology (VRST '01), pp. 55-61, 2001.

[24] B.E. Insko, M. Meehan, M. Whitton, and F.P. Brooks Jr., "Passive Haptics Significantly Enhances Virtual Environments," Proc. Presence Workshop, 2001.

[25] H. Ishii and B. Ullmer, "Tangible bits: Towards Seamless Interfaces Between People, Bits and Atoms," Proc. Conf. Human Factors in Computing Systems (SIGCHI '97), pp. 234-241, 1997.


[26] S.L. Jackson, J. Krajcik, and E. Soloway, "The Design of Guided Learner-adaptable Scaffolding in Interactive Learning Environments," Proc. Conf. Human Factors in Computing Systems (SIGCHI '98), pp. 187-194, 1998.

[27] M.J. Jacobson and A. Archodidou, "The Design of Hypermedia Tools for Learning: Fostering Conceptual Change and Transfer of Complex Scientific Knowledge," Journal of the Learning Sciences, vol. 9, no. 2, pp. 145-199, 2000.

[28] D.H. Jonassen, "Instructional Design Models for Well-structured and Ill-structured Problem-solving Learning Outcomes," Educational Technology Research and Development, vol. 45, no. 1, pp. 65-94, 1997.

[29] M.A. Just and P.A. Carpenter, "A capacity theory of comprehension: individual differences in working memory," Psychological Review, vol. 99, no. 1, pp. 122-149, 1992.

[30] D. Kalkofen, E. Mendez, and D. Schmalstieg, "Interactive Focus and Context Visualization for Augmented Reality," Proc. Int'l. Symp. Mixed and Augmented Reality (ISMAR '07), pp. 1-10, 2007.

[31] H. Kato and M. Billinghurst, "Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System," Proc. 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR '99), pp. 85-94, 1999.

[32] A. Kay, "User Interface: A Personal View," The Art of Human-Computer Interface Design, ed. B. Laurel, Addison-Wesley Professional, pp. 191-207, 1990.

[33] B.W. Knerr, D.R. Lampton, G.A. Martin, D.A. Washburn, and D. Cope, "Developing an After Action Review System for Virtual Dismounted Infantry Simulations," Proc. Interservice/Industry Training, Simulation & Education Conference (I/ITSEC '02), 2002.

[34] D.A. Kolb, Experiential Learning: Experience as the Source of Learning and Development, Prentice-Hall, 1984.

[35] D.A. Kolb, R.E. Boyatzis, and C. Mainemelis, "Experiential Learning Theory: Previous Research and New Directions," Perspectives on Thinking, Learning, and Cognitive Styles, The Educational Psychology Series, pp. 227-247, 2001.

[36] S. Lampotang, D.E. Lizdas, N. Gravenstein, and E.B. Liem, "Transparent Reality, a Simulation Based on Interactive Dynamic Graphical Models Emphasizing Visualization," Educational Technology, vol. 46, no. 1, pp. 55-59, 2006.

[37] S. Lampotang, D.E. Lizdas, E.B. Liem, and J.S. Gravenstein, "The Anesthesia Patient Safety Foundation Anesthesia Machine Workbook v1.1a," retrieved from http://vam.anest.ufl.edu/members/workbook/apsf-workbook-english.html


[38] A.M. Law and W.D. Kelton, Simulation Modeling and Analysis, McGraw-Hill Higher Education, 2000.

[39] F. Ling, C. Elizabeth, and D. Tharam, "A Semantic Network-based Design Methodology for XML Documents," ACM Trans. Inf. Syst., vol. 20, no. 4, pp. 390-421, 2002.

[40] M.A. Livingston, "Quantification of Visual Capabilities using Augmented Reality Displays," Proc. Int'l. Symp. Mixed and Augmented Reality (ISMAR '06), pp. 3-12, 2006.

[41] B. Lok, S. Naik, M. Whitton, and F.P. Brooks Jr., "Experiences in Extemporaneous Incorporation of Real Objects in Immersive Virtual Environments," Proc. Beyond Glove and Wand Based Interaction Workshop, IEEE Virtual Reality, pp. 107-110, 2004.

[42] B.C. Lok, "Toward the Merging of Real and Virtual Spaces," Communications of the ACM, vol. 47, no. 8, pp. 48-53, 2004.

[43] J. Looser, M. Billinghurst, and A. Cockburn, "Through the Looking Glass: the Use of Lenses as an Interface Tool for Augmented Reality Interfaces," Proc. 2nd Int'l. Conf. Computer Graphics and Interactive Techniques in Australasia and South East Asia, pp. 204-211, 2004.

[44] R. Macredie, S.J.E. Taylor, X. Yu, and R. Keeble, "Virtual Reality and Simulation: an Overview," Proc. Winter Simulation Conference (WSC '96), pp. 669-674, 1996.

[45] S.E. Mattsson, H. Elmqvist, and M. Otter, "Physical System Modeling with Modelica," Control Engineering Practice, vol. 6, no. 4, pp. 501-510, 1998.

[46] R.E. Mayer, "Should there be a three-strikes rule against pure discovery learning?," American Psychologist, vol. 59, no. 1, pp. 14-19, 2004.

[47] E. Mendez, D. Kalkofen, and D. Schmalstieg, "Interactive Context-driven Visualization Tools for Augmented Reality," Proc. Int'l. Symp. Mixed and Augmented Reality (ISMAR '06), pp. 209-218, 2006.

[48] P. Milgram and F. Kishino, "A Taxonomy of Mixed Reality Visual Displays," IEICE Transactions on Information Systems, vol. 77, no. 12, pp. 1321-1329, 1994.

[49] J.F. Morie, K. Iyer, D.P. Luigi, J. Williams, A. Dozois, and A.S. Rizzo, "Development of a Data Management Tool for Investigating Multivariate Space and Free Will Experiences in Virtual Reality," Applied Psychophysiology and Biofeedback, vol. 30, no. 3, pp. 319-331, 2005.

[50] D.A. Norman, The Psychology of Everyday Things, Basic Books, New York, 1988.


[51] R. Oliver and J. Herrington, "Exploring Technology-Mediated Learning from a Pedagogical Perspective," Interactive Learning Environments, vol. 11, no. 2, pp. 111-126, 2003.

[52] M. Otter, H. Elmqvist, and F.E. Cellier, "Modeling of Multibody Systems with the Object-oriented Modeling Language Dymola," Nonlinear Dynamics, vol. 9, no. 1, pp. 91-112, 1996.

[53] M. Park and P.A. Fishwick, "An Integrated Environment Blending Dynamic and Geometry Models," AI, Simulation and Planning in High Autonomy Systems, vol. 3397, no. 1, pp. 574-584, 2004.

[54] M. Park and P.A. Fishwick, "Integrating Dynamic and Geometry Model Components through Ontology-Based Inference," Simulation, vol. 81, no. 12, p. 795, 2005.

[55] M. Pidd, "Model development and HCI," Proc. Winter Simulation Conference (WSC '96), pp. 681-686, 1996.

[56] A.A.B. Pritsker, The GASP IV Simulation Language, John Wiley and Sons, Inc., 1974.

[57] J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, and B. Lok, "Collocated AAR: Augmenting After Action Review with Mixed Reality," Proc. Int'l. Symp. Mixed and Augmented Reality (ISMAR '08), pp. 107-116, 2008.

[58] J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, and B. Lok, "A Mixed Reality Approach for Merging Abstract and Concrete Knowledge," Proc. IEEE Virtual Reality (VR '08), pp. 27-34, 2008.

[59] J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, and B. Lok, "Scaffolded Learning with Mixed Reality," Computers & Graphics, vol. 33, no. 1, pp. 34-46, 2009.

[60] J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, and B. Lok, "Tangible User Interfaces Compensate for Low Spatial Cognition," Proc. Int'l. Symp. 3D User Interfaces (3DUI '08), pp. 11-18, 2008.

[61] A.B. Raij and B.C. Lok, "IPSViz: An After-Action Review Tool for Human-Virtual Human Experiences," Proc. IEEE Virtual Reality (VR '08), pp. 91-98, 2008.

[62] B. Schwerdtfeger and G. Klinker, "Supporting order picking with Augmented Reality," Proc. Int'l. Symp. Mixed and Augmented Reality (ISMAR '08), pp. 91-94, 2008.

[63] H. Shim and P. Fishwick, "Enabling the Concept of Hyperspace by Syntax/Semantics Co-Location within a Localized 3D Visualization Space," Proc. Human-Computer Interaction in Cyberspace: Emerging Technologies and Applications, 2007.


[64] B. Shneiderman, S.K. Card, and J.D. Mackinlay, Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann, 1999.

[65] T. Sielhorst, T. Blum, and N. Navab, "Synchronizing 3D Movements for Quantitative Comparison and Simultaneous Visualization of Actions," Proc. Int'l. Symp. Mixed and Augmented Reality (ISMAR '05), pp. 38-47, 2005.

[66] R.J. Spiro, P.J. Feltovich, M.J. Jacobson, and R.L. Coulson, "Cognitive Flexibility, Constructivism, and Hypertext: Random Access Instruction for Advanced Knowledge Acquisition in Ill-structured Domains," Educational Technology, vol. 31, no. 5, pp. 24-33, 1991.

[67] Studiocode Business Group, Studiocode, Copyright 2005 Studiocode Business Group, retrieved on April 28, 2008 from http://studiocodegroup.com

[68] I. Sutherland, "The Ultimate Display," Proceedings of the IFIP Congress, vol. 2, no. 1, pp. 506-508, 1965.

[69] J. Sweller, J.J.G. Van Merrienboer, and F. Paas, "Cognitive architecture and instructional design," Educational Psychology Review, vol. 10, no. 3, pp. 251-296, 1998.

[70] J. Tyson and C. Carmack, "How CRT Monitors Work," Copyright 2000 Howstuffworks.com, retrieved on March 25, 2009 from http://computer.howstuffworks.com/monitor7.htm

[71] B. Ullmer and H. Ishii, "Emerging Frameworks for Tangible User Interfaces," IBM Systems Journal, vol. 39, no. 3, pp. 915-931, 2000.

[72] University of Florida Department of Anesthesiology, Virtual Anesthesia Machine, Copyright 2009 University of Florida, retrieved on February 6, 2009 from http://vam.anest.ufl.edu

[73] A. van Rhijn and J.D. Mulder, "Optical Tracking and Calibration of Tangible Interaction Devices," Proc. Immersive Projection Technology and Virtual Environments Workshop, 2005.

[74] J. Viega, M.J. Conway, G. Williams, and R. Pausch, "3D Magic Lenses," Proc. ACM Symp. User Interface Software and Technology, pp. 51-58, 1996.

[75] C. Ware and J. Rose, "Rotating Virtual Objects with Real Handles," ACM Transactions on Computer-Human Interaction (TOCHI), vol. 6, no. 2, pp. 162-180, 1999.

[76] L.E. Whitman, M. Jorgensen, K. Hathiyari, and D. Malzahn, "Virtual Reality: Its Usefulness for Ergonomic Analysis," Proc. Winter Simulation Conference (WSC '04), pp. 1740-1745, 2004.


[77] M. Woo, J. Neider, T. Davis, and D. Shreiner, OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 1.2, Addison-Wesley Longman Publishing Co., Inc., 1999.

[78] B. Wu, R.L. Klatzky, D. Shelton, and G.D. Stetten, "Psychophysical Evaluation of In-situ Ultrasound Visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 6, pp. 684-693, 2005.


BIOGRAPHICAL SKETCH

John Quarles was born in Fort Worth, Texas, in 1982 to C.A. and Sonja Quarles. When he was 17, he met Keira Young, who later became his wife in 2004. In 2000, he attended the University of Texas at Austin. Four years later, he received a B.A. in computer science and graduated with special honors. Shortly thereafter, John was granted a four-year Alumni Fellowship for Ph.D. studies at the University of Florida. At the University of Florida, John joined the Virtual Experiences Research Group led by Dr. Benjamin Lok. Under the supervision of Dr. Lok, John researched novel applications of mixed and virtual reality in the areas of modeling and simulation, visualization, and medical training. This work led to significant publications, invited presentations, awards, and recognition in both academia and industry. John has authored over 7 publications in highly competitive international conferences and journals. In his fifth year as a Ph.D. candidate, for the academic year of 2008-2009, he received the Link Foundation Fellowship from the Institute for Simulation and Training at the University of Central Florida. Also, in 2008, he received third prize for best scientific exhibit at the American Society of Anesthesiologists conference in Orlando, FL. John received his Ph.D. in May of 2009. John and his collaborators are currently in the process of patenting the mixed simulator. Several companies in the medical industry have shown interest in licensing the mixed simulator technology. In the near future, John hopes to continue his work as a tenure-track assistant professor at a research institution.